
Showing papers by "University of Virginia" published in 2003


Journal ArticleDOI
TL;DR: The Unified Theory of Acceptance and Use of Technology (UTAUT), as formulated in this paper, is a unified model that integrates elements across eight prominent models of technology acceptance and is empirically validated.
Abstract: Information technology (IT) acceptance research has yielded many competing models, each with different sets of acceptance determinants. In this paper, we (1) review user acceptance literature and discuss eight prominent models, (2) empirically compare the eight models and their extensions, (3) formulate a unified model that integrates elements across the eight models, and (4) empirically validate the unified model. The eight models reviewed are the theory of reasoned action, the technology acceptance model, the motivational model, the theory of planned behavior, a model combining the technology acceptance model and the theory of planned behavior, the model of PC utilization, the innovation diffusion theory, and the social cognitive theory. Using data from four organizations over a six-month period with three points of measurement, the eight models explained between 17 percent and 53 percent of the variance in user intentions to use information technology. Next, a unified model, called the Unified Theory of Acceptance and Use of Technology (UTAUT), was formulated, with four core determinants of intention and usage, and up to four moderators of key relationships. UTAUT was then tested using the original data and found to outperform the eight individual models (adjusted R2 of 69 percent). UTAUT was then confirmed with data from two new organizations with similar results (adjusted R2 of 70 percent). UTAUT thus provides a useful tool for managers needing to assess the likelihood of success for new technology introductions and helps them understand the drivers of acceptance in order to proactively design interventions (including training, marketing, etc.) targeted at populations of users that may be less inclined to adopt and use new systems. 
The paper also makes several recommendations for future research including developing a deeper understanding of the dynamic influences studied here, refining measurement of the core constructs used in UTAUT, and understanding the organizational outcomes associated with new technology use.

27,798 citations


Posted Content
TL;DR: UTAUT provides a useful tool for managers needing to assess the likelihood of success for new technology introductions and helps them understand the drivers of acceptance in order to proactively design interventions targeted at populations of users that may be less inclined to adopt and use new systems.
Abstract: Information technology (IT) acceptance research has yielded many competing models, each with different sets of acceptance determinants. In this paper, we: (1) review user acceptance literature and discuss eight prominent models, (2) empirically compare the eight models and their extensions, (3) formulate a unified model that integrates elements across the eight models, and (4) empirically validate the unified model. The eight models reviewed are the theory of reasoned action, the technology acceptance model, the motivational model, the theory of planned behavior, a model combining the technology acceptance model and the theory of planned behavior, the model of PC utilization, the innovation diffusion theory, and the social cognitive theory. Using data from four organizations over a six-month period with three points of measurement, the eight models explained between 17 percent and 53 percent of the variance in user intentions to use information technology. Next, a unified model, called the Unified Theory of Acceptance and Use of Technology (UTAUT), was formulated, with four core determinants of intention and usage, and up to four moderators of key relationships. UTAUT was then tested using the original data and found to outperform the eight individual models (adjusted R2 of 69 percent). UTAUT was then confirmed with data from two new organizations with similar results (adjusted R2 of 70 percent). UTAUT thus provides a useful tool for managers needing to assess the likelihood of success for new technology introductions and helps them understand the drivers of acceptance in order to proactively design interventions (including training, marketing, etc.) targeted at populations of users that may be less inclined to adopt and use new systems. 
The paper also makes several recommendations for future research including developing a deeper understanding of the dynamic influences studied here, refining measurement of the core constructs used in UTAUT, and understanding the organizational outcomes associated with new technology use.

5,658 citations


Journal ArticleDOI
05 Dec 2003-Science
TL;DR: The mechanisms underlying the major steps of migration and the signaling pathways that regulate them are described, and recent advances investigating the nature of polarity in migrating cells and the pathways that establish it are outlined.
Abstract: Cell migration is a highly integrated multistep process that orchestrates embryonic morphogenesis; contributes to tissue repair and regeneration; and drives disease progression in cancer, mental retardation, atherosclerosis, and arthritis. The migrating cell is highly polarized with complex regulatory pathways that spatially and temporally integrate its component processes. This review describes the mechanisms underlying the major steps of migration and the signaling pathways that regulate them, and outlines recent advances investigating the nature of polarity in migrating cells and the pathways that establish it.

4,839 citations


Journal Article

2,609 citations


Journal ArticleDOI
04 Jan 2003-BMJ
TL;DR: If medical journals adopt the STARD checklist and flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.
Abstract: Objective: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy, to allow readers to assess the potential for bias in a study, and to evaluate a study's generalisability. Methods: The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two day consensus meeting, with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. Results: The search for published guidelines about diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25 item checklist, by using evidence, whenever available. A prototype of a flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation and the reference standard, or both. Conclusions: Evaluation of research depends on complete and accurate reporting. If medical journals adopt the STARD checklist and flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public. The Standards for Reporting of Diagnostic Accuracy (STARD) steering group aims to improve the accuracy and completeness of reporting of studies of diagnostic accuracy. The group describes and explains the development of a checklist and flow diagram for authors of reports.

2,550 citations


Journal ArticleDOI
TL;DR: Overexpression of this bone metastasis gene set is superimposed on a poor-prognosis gene expression signature already present in the parental breast cancer population, suggesting that metastasis requires a set of functions beyond those underlying the emergence of the primary tumor.

2,493 citations


Proceedings ArticleDOI
14 Sep 2003
TL;DR: In this paper, the authors present APIT, a novel range-free localization algorithm, and show that it performs best when an irregular radio pattern and random node placement are considered and low communication overhead is desired.
Abstract: Wireless Sensor Networks have been proposed for a multitude of location-dependent applications. For such systems, the cost and limitations of the hardware on sensing nodes prevent the use of range-based localization schemes that depend on absolute point-to-point distance estimates. Because coarse accuracy is sufficient for most sensor network applications, solutions in range-free localization are being pursued as a cost-effective alternative to more expensive range-based approaches. In this paper, we present APIT, a novel localization algorithm that is range-free. We show that our APIT scheme performs best when an irregular radio pattern and random node placement are considered, and low communication overhead is desired. We compare our work via extensive simulation, with three state-of-the-art range-free localization schemes to identify the preferable system configurations of each. In addition, we study the effect of location error on routing and tracking performance. We show that routing performance and tracking accuracy are not significantly affected by localization error when the error is less than 0.4 times the communication radio radius.

2,461 citations
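The area-based aggregation behind APIT can be sketched as follows. This is a simplified illustration, not the authors' implementation: real APIT approximates the point-in-triangle (PIT) test from neighbors' signal strengths, whereas here the inside/outside results for each anchor triangle are simply given, and the anchor positions and grid resolution are hypothetical.

```python
import itertools

def _sign(p, a, b):
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_triangle(p, tri):
    """Standard half-plane point-in-triangle test (boundary counts as inside)."""
    a, b, c = tri
    s1, s2, s3 = _sign(p, a, b), _sign(p, b, c), _sign(p, c, a)
    has_neg = s1 < 0 or s2 < 0 or s3 < 0
    has_pos = s1 > 0 or s2 > 0 or s3 > 0
    return not (has_neg and has_pos)

def apit_estimate(anchors, inside_results, grid_step=0.25, extent=10.0):
    """Grid aggregation: a cell scores +1 per anchor triangle whose
    inside/outside status matches the node's own result, -1 otherwise;
    the location estimate is the centroid of the best-scoring cells."""
    triangles = list(itertools.combinations(anchors, 3))
    best_score, best_cells = None, []
    axis = [i * grid_step for i in range(int(extent / grid_step) + 1)]
    for cell in itertools.product(axis, axis):
        score = sum(1 if in_triangle(cell, tri) == inside else -1
                    for tri, inside in zip(triangles, inside_results))
        if best_score is None or score > best_score:
            best_score, best_cells = score, [cell]
        elif score == best_score:
            best_cells.append(cell)
    n = len(best_cells)
    return (sum(c[0] for c in best_cells) / n,
            sum(c[1] for c in best_cells) / n)

# Demo: four corner anchors; the node's true (unknown) position determines
# the inside/outside result for each of the C(4,3) = 4 anchor triangles.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (6.0, 3.0)
inside_results = [in_triangle(true_pos, tri)
                  for tri in itertools.combinations(anchors, 3)]
estimate = apit_estimate(anchors, inside_results)
```

The estimate is the centroid of the whole region consistent with the four tests, so it is deliberately coarse, which matches the abstract's premise that coarse accuracy suffices for most sensor-network applications.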


Journal ArticleDOI
TL;DR: The research agenda for the future includes establishing the role of insulin resistance and abnormal lipoprotein metabolism in NASH, determining the pathogenesis of cellular injury, defining predisposing genetic abnormalities, identifying better noninvasive predictors of disease, and defining effective therapy.

2,134 citations


Journal ArticleDOI
TL;DR: A formal model of information seeking is proposed in which the probability of seeking information from another person is a function of knowing what that person knows; valuing what that person knows; being able to gain timely access to that person's thinking; and perceiving that seeking information from that person would not be too costly.
Abstract: Research in organizational learning has demonstrated processes and occasionally performance implications of acquisition of declarative (know-what) and procedural (know-how) knowledge. However, considerably less attention has been paid to learned characteristics of relationships that affect the decision to seek information from other people. Based on a review of the social network, information processing, and organizational learning literatures, along with the results of a previous qualitative study, we propose a formal model of information seeking in which the probability of seeking information from another person is a function of (1) knowing what that person knows; (2) valuing what that person knows; (3) being able to gain timely access to that person's thinking; and (4) perceiving that seeking information from that person would not be too costly. We also hypothesize that the knowing, access, and cost variables mediate the relationship between physical proximity and information seeking. The model is tested using two separate research sites to provide replication. The results indicate strong support for the model and the mediation hypothesis (with the exception of the cost variable). Implications are drawn for the study of both transactive memory and organizational learning, as well as for management practice.

2,042 citations


Journal ArticleDOI
TL;DR: It is suggested that the RhoA/ROK pathway is constitutively active in a number of organs under physiological conditions; its aberrations play major roles in several disease states, particularly impacting on Ca2+ sensitization of smooth muscle in hypertension and possibly asthma and on cancer neoangiogenesis and cancer progression.
Abstract: Somlyo, Andrew P., and Avril V. Somlyo. Ca2+ Sensitivity of Smooth Muscle and Nonmuscle Myosin II: Modulated by G Proteins, Kinases, and Myosin Phosphatase. Physiol Rev 83: 1325-1358, 2003; 10.1152...

1,923 citations


01 Jan 2003
TL;DR: In many of the world's religious traditions, the good go up, to heaven or a higher rebirth, and the bad go down, to hell or a lower rebirth as mentioned in this paper.
Abstract: Morality dignifies and elevates. When Adam and Eve ate the forbidden fruit, God said "Behold, the man is become as one of us, to know good and evil" (Gen. 3:22). In many of the world's religious traditions, the good go up, to heaven or a higher rebirth, and the bad go down, to hell or a lower rebirth. Even among secular people, moral motives are spoken of as the "highest" and "noblest" motives, whereas greed and lust are regarded as "baser" or "lower" instincts. Morality is therefore like the temple on the hill of human nature: It is our most sacred attribute, a trait that is often said to separate us from other animals and bring us closer to God. For 2,400 years, the temple has been occupied by the high priests of reason. Plato (4th century B.C./1949) presented a model of a divided self in which reason, firmly ensconced in the head, rules over the passions, which rumble around in the chest and stomach (Timaeus, 69). Aristotle had a similar conception of reason as the wise master and emotion as the foolish slave: "anger seems to listen to reason, but to hear wrong, like hasty servants, who run off before they have heard everything their master tells them, and fail to do what they were ordered, or like dogs, which bark as soon as there is a knock without waiting to see if the visitor is a friend" (Ethics, 1962, 1149a). Throughout the long history of moral philosophy, the focus has generally been on moral reasoning, whereas the moral emotions have been regarded with some suspicion (Solomon, 1993). Even when moral psychology finally separated itself from moral philosophy and began to make its own empir-

Journal ArticleDOI
TL;DR: The focus of this paper is aircraft and aircraft engines, but the broader theme is the role of materials in creating lightweight structures; examples relevant to automotive applications, once adjusted for cost, are also included.

Journal ArticleDOI
TL;DR: Current estimates of the global burden of disease for diarrhoea are reported and compared with previous estimates made using data collected in 1954-79 and 1980-89, finding that the total morbidity component of the disease burden is greater than previously estimated.
Abstract: Current estimates of the global burden of disease for diarrhoea are reported and compared with previous estimates made using data collected in 1954-79 and 1980-89. A structured literature review was used to identify studies that characterized morbidity rates by prospective surveillance of stable populations and studies that characterized mortality attributable to diarrhoea through active surveillance. For children under 5 years of age in developing areas and countries, there was a median of 3.2 episodes of diarrhoea per child-year. This indicated little change from previously described incidences. Estimates of mortality revealed that 4.9 children per 1000 per year in these areas and countries died as a result of diarrhoeal illness in the first 5 years of life, a decline from the previous estimates of 13.6 and 5.6 per 1000 per year. The decrease was most pronounced in children aged under 1 year. Despite improving trends in mortality rates, diarrhoea accounted for a median of 21% of all deaths of children aged under 5 years in these areas and countries, being responsible for 2.5 million deaths per year. There has not been a concurrent decrease in morbidity rates attributable to diarrhoea. As population growth is focused in the poorest areas, the total morbidity component of the disease burden is greater than previously estimated.
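The mortality figures above can be cross-checked with simple arithmetic; the implied population denominator below is a derived estimate, not a number stated in the abstract.

```python
deaths_per_year = 2.5e6        # diarrhoeal deaths per year in under-5s (abstract)
mortality_rate = 4.9 / 1000    # deaths per child per year (abstract)

# Implied under-5 child-years covered by the estimate: about 510 million,
# a plausible under-5 population for developing areas in this period.
implied_child_years = deaths_per_year / mortality_rate
```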

Journal ArticleDOI
TL;DR: The goal of this review is to provide a comprehensive description of T-type currents, their distribution, regulation, pharmacology, and cloning.
Abstract: T-type Ca2+ channels were originally called low-voltage-activated (LVA) channels because they can be activated by small depolarizations of the plasma membrane. In many neurons Ca2+ influx through L...

Journal ArticleDOI
TL;DR: The theoretical basis for modeling univariate traffic condition data streams as seasonal autoregressive integrated moving average processes as well as empirical results using actual intelligent transportation system data are presented and found to be consistent with the theoretical hypothesis.
Abstract: This article presents the theoretical basis for modeling univariate traffic condition data streams as seasonal autoregressive integrated moving average processes. This foundation rests on the Wold decomposition theorem and on the assertion that a one-week lagged first seasonal difference applied to discrete interval traffic condition data will yield a weakly stationary transformation. Moreover, empirical results using actual intelligent transportation system data are presented and found to be consistent with the theoretical hypothesis. Conclusions are given on the implications of these assertions and findings relative to ongoing intelligent transportation systems research, deployment, and operations.
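The one-week lagged first seasonal difference the article relies on can be sketched in a few lines. This is a minimal illustration with synthetic 15-minute counts; the 672-interval week and the sinusoidal weekly cycle are assumptions for the demo, not the article's data.

```python
import numpy as np

def weekly_seasonal_difference(series, intervals_per_week=672):
    """One-week lagged first seasonal difference: d[t] = x[t] - x[t - s].
    For 15-minute traffic data, a week spans s = 7 * 24 * 4 = 672 intervals."""
    x = np.asarray(series, dtype=float)
    return x[intervals_per_week:] - x[:-intervals_per_week]

# Synthetic demo: a deterministic weekly cycle plus noise. Differencing
# cancels the cycle, leaving an approximately weakly stationary series.
rng = np.random.default_rng(0)
t = np.arange(672 * 4)                                 # four weeks of data
counts = 100 + 40 * np.sin(2 * np.pi * t / 672) + rng.normal(0, 5, t.size)
diffed = weekly_seasonal_difference(counts)
```

After this transformation, fitting a low-order ARMA model to `diffed` corresponds to a seasonal ARIMA model with a single seasonal difference at lag 672 on the raw counts.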

Proceedings ArticleDOI
19 May 2003
TL;DR: SPEED is a highly efficient and scalable protocol for sensor networks where the resources of each node are scarce, and specifically tailored to be a stateless, localized algorithm with minimal control overhead.
Abstract: In this paper, we present a real-time communication protocol for sensor networks, called SPEED. The protocol provides three types of real-time communication services, namely, real-time unicast, real-time area-multicast and real-time area-anycast. SPEED is specifically tailored to be a stateless, localized algorithm with minimal control overhead. End-to-end soft real-time communication is achieved by maintaining a desired delivery speed across the sensor network through a novel combination of feedback control and non-deterministic geographic forwarding. SPEED is a highly efficient and scalable protocol for sensor networks where the resources of each node are scarce. Theoretical analysis, simulation experiments and a real implementation on Berkeley motes are provided to validate our claims.
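The core forwarding rule, geographic progress gated by a delivery-speed setpoint, can be sketched as follows. This is a simplified deterministic illustration; the coordinates and delay figures are hypothetical, and real SPEED adds feedback control and probabilistic neighbor selection on top of this rule.

```python
import math

def choose_next_hop(node, dest, neighbors, delays, setpoint_speed):
    """Pick the neighbor with the highest relay speed, where relay speed is
    the geographic advance toward dest divided by the estimated one-hop
    delay; return None if no neighbor meets the setpoint (real SPEED would
    then drop or reroute via back-pressure)."""
    best = None
    for nbr, delay in zip(neighbors, delays):
        advance = math.dist(node, dest) - math.dist(nbr, dest)
        if advance <= 0 or delay <= 0:
            continue                     # never forward away from dest
        speed = advance / delay
        if speed >= setpoint_speed and (best is None or speed > best[0]):
            best = (speed, nbr)
    return best[1] if best else None

node, dest = (0.0, 0.0), (10.0, 0.0)
neighbors = [(1.0, 0.0), (0.5, 0.5), (-1.0, 0.0)]
delays = [0.10, 0.02, 0.05]              # estimated one-hop delays (s)
next_hop = choose_next_hop(node, dest, neighbors, delays, setpoint_speed=5.0)
```

Note that the neighbor offering the most geographic progress is not necessarily chosen; a nearer neighbor with a much smaller queueing delay can offer a higher relay speed.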

Journal ArticleDOI
TL;DR: This explanatory document aims to facilitate the use, understanding, and dissemination of the checklist and contains a clarification of the meaning, rationale, and optimal use of each item on the checklist.
Abstract: The quality of reporting of studies of diagnostic accuracy is less than optimal. Complete and accurate reporting is necessary to enable readers to assess the potential for bias in the study and to evaluate the generalisability of the results. A group of scientists and editors has developed the STARD (Standards for Reporting of Diagnostic Accuracy) statement to improve the quality of reporting of studies of diagnostic accuracy. The statement consists of a checklist of 25 items and a flow diagram that authors can use to ensure that all relevant information is present. This explanatory document aims to facilitate the use, understanding and dissemination of the checklist. The document contains a clarification of the meaning, rationale and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The STARD statement, checklist, flowchart and this explanation and elaboration document should be useful resources to improve reporting of diagnostic accuracy studies. Complete and informative reporting can only lead to better decisions in healthcare.

Journal ArticleDOI
TL;DR: The anthropogenic era is generally thought to have begun 150 to 200 years ago, when the industrial revolution began producing CO2 and CH4 at rates sufficient to alter their compositions in the atmosphere as discussed by the authors.
Abstract: The anthropogenic era is generally thought to have begun 150 to 200 years ago, when the industrial revolution began producing CO2 and CH4 at rates sufficient to alter their compositions in the atmosphere. A different hypothesis is posed here: anthropogenic emissions of these gases first altered atmospheric concentrations thousands of years ago. This hypothesis is based on three arguments. (1) Cyclic variations in CO2 and CH4 driven by Earth-orbital changes during the last 350,000 years predict decreases throughout the Holocene, but the CO2 trend began an anomalous increase 8000 years ago, and the CH4 trend did so 5000 years ago. (2) Published explanations for these mid- to late-Holocene gas increases based on natural forcing can be rejected based on paleoclimatic evidence. (3) A wide array of archeological, cultural, historical and geologic evidence points to viable explanations tied to anthropogenic changes resulting from early agriculture in Eurasia, including the start of forest clearance by 8000 years ago and of rice irrigation by 5000 years ago. In recent millennia, the estimated warming caused by these early gas emissions reached a global-mean value of ∼0.8 °C and roughly 2 °C at high latitudes, large enough to have stopped a glaciation of northeastern Canada predicted by two kinds of climatic models. CO2 oscillations of ∼10 ppm in the last 1000 years are too large to be explained by external (solar-volcanic) forcing, but they can be explained by outbreaks of bubonic plague that caused historically documented farm abandonment in western Eurasia. Forest regrowth on abandoned farms sequestered enough carbon to account for the observed CO2 decreases. Plague-driven CO2 changes were also a significant causal factor in temperature changes during the Little Ice Age (1300-1900 AD).

Proceedings ArticleDOI
01 May 2003
TL;DR: HotSpot is described, an accurate yet fast model based on an equivalent circuit of thermal resistances and capacitances that correspond to microarchitecture blocks and essential aspects of the thermal package that shows that power metrics are poor predictors of temperature, and that sensor imprecision has a substantial impact on the performance of DTM.
Abstract: With power density and hence cooling costs rising exponentially, processor packaging can no longer be designed for the worst case, and there is an urgent need for runtime processor-level techniques that can regulate operating temperature when the package's capacity is exceeded. Evaluating such techniques, however, requires a thermal model that is practical for architectural studies. This paper describes HotSpot, an accurate yet fast model based on an equivalent circuit of thermal resistances and capacitances that correspond to microarchitecture blocks and essential aspects of the thermal package. Validation was performed using finite-element simulation. The paper also introduces several effective methods for dynamic thermal management (DTM): "temperature-tracking" frequency scaling, localized toggling, and migrating computation to spare hardware units. Modeling temperature at the microarchitecture level also shows that power metrics are poor predictors of temperature, and that sensor imprecision has a substantial impact on the performance of DTM.
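The equivalent-circuit idea can be illustrated with a single lumped RC node. The resistance and capacitance values below are hypothetical; HotSpot itself solves a full network of such elements, one or more per microarchitecture block and package layer.

```python
def block_temperature(power_w, r_k_per_w, c_j_per_k,
                      t_ambient=45.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of one thermal RC node:
    C * dT/dt = P - (T - T_ambient) / R.
    The steady-state solution is T_ambient + P * R."""
    temp = t_ambient
    for _ in range(steps):
        temp += (power_w - (temp - t_ambient) / r_k_per_w) / c_j_per_k * dt
    return temp

# Hypothetical block: 20 W dissipated, 0.8 K/W to ambient, 0.05 J/K capacity.
final_temp = block_temperature(20.0, 0.8, 0.05)  # approaches 45 + 20*0.8 = 61
```

The RC product sets the thermal time constant, which is why instantaneous power is a poor predictor of temperature: the node integrates power history rather than tracking its current value.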

Journal ArticleDOI
TL;DR: Different layers of cross-talk between several components of this complex regulatory system are emerging, and these epigenetic circuits are the focus of this review.

Journal ArticleDOI
TL;DR: Results demonstrate that the proportions of IQ variance attributable to genes and environment vary nonlinearly with SES, and suggest that in impoverished families, 60% of the variance in IQ is accounted for by the shared environment, and the contribution of genes is close to zero; in affluent families, the result is almost exactly the reverse.
Abstract: Scores on the Wechsler Intelligence Scale for Children were analyzed in a sample of 7-year-old twins from the National Collaborative Perinatal Project. A substantial proportion of the twins were raised in families living near or below the poverty level. Biometric analyses were conducted using models allowing for components attributable to the additive effects of genotype, shared environment, and nonshared environment to interact with socioeconomic status (SES) measured as a continuous variable. Results demonstrate that the proportions of IQ variance attributable to genes and environment vary nonlinearly with SES. The models suggest that in impoverished families, 60% of the variance in IQ is accounted for by the shared environment, and the contribution of genes is close to zero; in affluent families, the result is almost exactly the reverse.

Journal ArticleDOI
TL;DR: It is suggested that two appraisals are central and are present in all clear cases of awe: perceived vastness, and a need for accommodation, defined as an inability to assimilate an experience into current mental structures.
Abstract: In this paper we present a prototype approach to awe. We suggest that two appraisals are central and are present in all clear cases of awe: perceived vastness, and a need for accommodation, defined as an inability to assimilate an experience into current mental structures. Five additional appraisals account for variation in the hedonic tone of awe experiences: threat, beauty, exceptional ability, virtue, and the supernatural. We derive this perspective from a review of what has been written about awe in religion, philosophy, sociology, and psychology, and then we apply this perspective to an analysis of awe and related states such as admiration, elevation, and the epiphanic experience.

Journal ArticleDOI
TL;DR: In this article, the Sagittarius (Sgr) dwarf galaxy was mapped by M-giant star tracers detected in the complete Two Micron All Sky Survey (2MASS).
Abstract: We present the first all-sky view of the Sagittarius (Sgr) dwarf galaxy mapped by M-giant star tracers detected in the complete Two Micron All Sky Survey (2MASS). Near-infrared photometry of Sgr's prominent M-giant population permits an unprecedentedly clear view of the center of Sgr. The main body is fitted with a King profile of limiting major-axis radius 30°—substantially larger than previously found or assumed—beyond which is a prominent break in the density profile from stars in the Sgr tidal tails; thus the Sgr radial profile resembles that of Galactic dwarf spheroidal (dSph) satellites. Adopting traditional methods for analyzing dSph light profiles, we determine the brightness of the main body of Sgr to be MV = -13.27 (the brightest of the known Galactic dSph galaxies) and the total Sgr mass-to-light ratio to be 25 in solar units. However, we regard the latter result with suspicion and argue that much of the observed structure beyond the King-fit core radius (224') may be outside the actual Sgr tidal radius as the former dwarf spiral/irregular satellite undergoes catastrophic disruption during its last orbits. The M-giant distribution of Sgr exhibits a central density cusp at the same location as, but not due to, the old stars constituting the globular cluster M54. A striking trailing tidal tail is found to extend from the Sgr center and arc across the south Galactic hemisphere with approximately constant density and mean distance varying from ~20 to 40 kpc. A prominent leading debris arm extends from the Sgr center northward of the Galactic plane to an apogalacticon ~45 kpc from the Sun and then turns toward the north Galactic cap (NGC), from where it descends back toward the Galactic plane, becomes foreshortened, and, at brighter magnitudes, covers the NGC. The leading and trailing Sgr tails lie along a well-defined orbital plane about the Galactic center. 
The Sun lies within a kiloparsec of that plane and near the path of leading Sgr debris; thus, it is possible that former Sgr stars are near or in the solar neighborhood. We discuss the implications of this new view of the Sgr galaxy and its entrails for the character of the Sgr orbit, mass, mass-loss rate, and contribution of stars to the Milky Way halo. The minimal precession displayed by the Sgr tidal debris along its inclined orbit supports the notion of a nearly spherical Galactic potential. The number of M giants in the Sgr tails is at least 15% that contained within the King limiting radius of the main Sgr body. The fact that M giants, presumably formed within the past few gigayears in the Sgr nucleus, are nevertheless so widespread along the Sgr tidal arms not only places limits on the dynamical age of these arms but also poses a timing problem that bears on the recent binding energy of the Sgr core and that is most naturally explained by recent and catastrophic mass loss. Sgr appears to contribute more than 75% of the high-latitude, halo M giants, despite substantial reservoirs of M giants in the Magellanic Clouds. No evidence of extended M-giant tidal debris from the Magellanic Clouds is found. Generally good correspondence is found between the M-giant, all-sky map of the Sgr system and all previously published detections of potential Sgr debris, with the exception of Sgr carbon stars, which must be subluminous compared with counterparts in other Galactic satellites in order to resolve the discrepancy.

Journal ArticleDOI
TL;DR: It is shown that the chromodomain proteins Polycomb (Pc) and HP1 (heterochromatin protein 1) are highly discriminatory for binding to these sites in vivo and in vitro, and a role for their chromodomains in both target site binding and discrimination is indicated.
Abstract: On the histone H3 tail, Lys 9 and Lys 27 are both methylation sites associated with epigenetic repression, and reside within a highly related sequence motif ARKS. Here we show that the chromodomain proteins Polycomb (Pc) and HP1 (heterochromatin protein 1) are highly discriminatory for binding to these sites in vivo and in vitro. In Drosophila S2 cells, and on polytene chromosomes, methyl-Lys 27 and Pc are both excluded from areas that are enriched in methyl-Lys 9 and HP1. Swapping of the chromodomain regions of Pc and HP1 is sufficient for switching the nuclear localization patterns of these factors, indicating a role for their chromodomains in both target site binding and discrimination. To better understand the molecular basis for the selection of methyl-lysine binding sites, we solved the 1.8 Å structure of the Pc chromodomain in complex with a H3 peptide bearing trimethyl-Lys 27, and compared it with our previously determined structure of the HP1 chromodomain in complex with a H3 peptide bearing trimethyl-Lys 9. The Pc chromodomain distinguishes its methylation target on the H3 tail via an extended recognition groove that binds five additional residues preceding the ARKS motif.

Journal ArticleDOI
TL;DR: The first all-sky view of the Sagittarius (Sgr) dwarf galaxy mapped by M giant star tracers detected in the complete Two Micron All-Sky Survey (2MASS) was presented in this paper.
Abstract: We present the first all-sky view of the Sagittarius (Sgr) dwarf galaxy mapped by M giant star tracers detected in the complete Two Micron All-Sky Survey (2MASS). The main body is fit with a King profile of 30 deg limiting radius, but with a break in the density profile from stars in tidal tails. We argue that much of the observed structure beyond the 224' core radius may be unbound as the satellite undergoes catastrophic disruption. A striking, >150 deg trailing tidal tail extends from the Sgr center and arcs across the South Galactic Hemisphere. A prominent leading debris arm extends from the Sgr center northward of the Galactic plane to an ~40 kpc apoGalacticon, loops towards the North Galactic Cap (NGC) and descends back towards the Galactic plane, foreshortened and covering the NGC. The Sgr tails lie along a well-defined orbital plane that shows little precession, which supports the notion of a nearly spherical Galactic potential. The Sun lies near the path of leading Sgr debris; thus, former Sgr stars may be near or in the solar neighborhood. The number of M giants in the Sgr tails is >15% that within the King limiting radius of the Sgr center. That several gigayear old M giants are so widespread along the Sgr tidal arms not only places limits on the dynamical age of these arms but poses a timing problem that bears on the recent binding energy of the Sgr core and that is naturally explained by recent and catastrophic mass loss. Sgr appears to contribute >75% of the high latitude, halo M giants; no evidence for M giant tidal debris from the Magellanic Clouds is found. Generally good correspondence is found between the M giant, all-sky map of the Sgr system and all previously published detections of potential Sgr debris, with the exception of Sgr carbon stars, which must be subluminous to resolve the discrepancy.

Journal ArticleDOI
TL;DR: The results indicate that circulating activated platelets and platelet–leukocyte/monocyte aggregates promote formation of atherosclerotic lesions.
Abstract: We studied whether circulating activated platelets and platelet-leukocyte aggregates cause the development of atherosclerotic lesions in apolipoprotein-E-deficient (Apoe(-/-)) mice. Circulating activated platelets bound to leukocytes, preferentially monocytes, to form platelet-monocyte/leukocyte aggregates. Activated platelets and platelet-leukocyte aggregates interacted with atherosclerotic lesions. The interactions of activated platelets with monocytes and atherosclerotic arteries led to delivery of the platelet-derived chemokines CCL5 (regulated on activation, normal T cell expressed and secreted, RANTES) and CXCL4 (platelet factor 4) to the monocyte surface and endothelium of atherosclerotic arteries. The presence of activated platelets promoted leukocyte binding of vascular cell adhesion molecule-1 (VCAM-1) and increased their adhesiveness to inflamed or atherosclerotic endothelium. Injection of activated wild-type, but not P-selectin-deficient, platelets increased monocyte arrest on the surface of atherosclerotic lesions and the size of atherosclerotic lesions in Apoe(-/-) mice. Our results indicate that circulating activated platelets and platelet-leukocyte/monocyte aggregates promote formation of atherosclerotic lesions. This role of activated platelets in atherosclerosis is attributed to platelet P-selectin-mediated delivery of platelet-derived proinflammatory factors to monocytes/leukocytes and the vessel wall.

Journal ArticleDOI
TL;DR: Neuraxial anesthesia and analgesia provide several advantages over systemic opioids, including superior analgesia, reduced blood loss and need for transfusion, decreased incidence of graft occlusion, and improved joint mobility following major knee surgery.

Journal ArticleDOI
TL;DR: In this article, the authors employ a Bayesian dynamic latent factor model to estimate common components in macroeconomic aggregates (output, consumption, and investment) in a 60-country sample covering seven regions of the world.
Abstract: The paper investigates the common dynamic properties of business-cycle fluctuations across countries, regions, and the world. We employ a Bayesian dynamic latent factor model to estimate common components in macroeconomic aggregates (output, consumption, and investment) in a 60-country sample covering seven regions of the world. The results indicate that a common world factor is an important source of volatility for aggregates in most countries, providing evidence for a world business cycle. We find that region-specific factors play only a minor role in explaining fluctuations in economic activity. We also document similarities and differences across regions, countries, and aggregates. (JEL F41, E32, C11, C32)
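The variance decomposition at the heart of this paper can be illustrated with a toy simulation. The sketch below is not the authors' Bayesian estimator: it simply generates one country's output series from hypothetical world and region factors (AR(1) dynamics, made-up loadings) and computes the share of variance attributable to the world factor, the quantity the paper estimates for 60 countries.

```python
import random

random.seed(0)
T = 200  # time periods

def ar1(phi, n):
    """Simulate a mean-zero AR(1) series -- the 'dynamic' part of the model."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

world = ar1(0.9, T)    # common world factor
region = ar1(0.7, T)   # region-specific factor

# Hypothetical loadings for one country's output growth
lam_w, lam_r = 0.8, 0.4
y = [lam_w * w + lam_r * g + random.gauss(0.0, 0.5)
     for w, g in zip(world, region)]

def var(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

# Share of the country's variance attributed to the world factor
world_share = var([lam_w * w for w in world]) / var(y)
```

With loadings like these the world factor dominates the country's volatility, mirroring the paper's finding that the world factor matters far more than region-specific factors.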

Journal ArticleDOI
TL;DR: FRET is a powerful technique for studying molecular interactions inside living cells with improved spatial (angstrom) and temporal (nanosecond) resolution, distance range, and sensitivity and a broader range of biological applications.
Abstract: The current advances in fluorescence microscopy, coupled with the development of new fluorescent probes, make fluorescence resonance energy transfer (FRET) a powerful technique for studying molecular interactions inside living cells, offering improved spatial (angstrom) and temporal (nanosecond) resolution, distance range, and sensitivity, and a broader range of biological applications.
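The angstrom-scale distance sensitivity mentioned above comes from the textbook Forster relation, in which transfer efficiency falls off with the sixth power of donor-acceptor separation. The sketch below is a generic illustration of that relation, not code from the review; r0 (the Forster radius, where efficiency is 50%) is probe-pair dependent.

```python
def fret_efficiency(r, r0):
    """Forster transfer efficiency E = 1 / (1 + (r/r0)^6).

    r  : donor-acceptor separation
    r0 : Forster radius of the probe pair (same units as r)
    The steep sixth-power dependence means E swings from ~1 to ~0
    over a narrow window around r0, giving angstrom-scale sensitivity.
    """
    return 1.0 / (1.0 + (r / r0) ** 6)

# Example with a hypothetical Forster radius of 5 nm:
e_close = fret_efficiency(2.0, 5.0)   # probes nearly touching -> high E
e_far = fret_efficiency(10.0, 5.0)    # probes far apart -> E near zero
```

At r = r0 the efficiency is exactly 0.5 by construction, which is why FRET measurements are most informative for separations near the Forster radius.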

Journal ArticleDOI
TL;DR: Corporate brand management is a dynamic process that involves keeping up with continuous adjustments of vision, culture and image as discussed by the authors, and it is important to bring the whole corporation into corporate branding.
Abstract: This paper describes corporate branding as an organisational tool whose successful application depends on attending to the strategic, organisational and communicational context in which it is used. A model to help managers analyse context in terms of the alignment between strategic vision, organisational culture and corporate image is presented. The model is based on a gap analysis, which enables managers to assess the coherence of their corporate brand. Use of the model is illustrated by examining the stages of development that British Airways passed through in the creation of its corporate brand. The paper concludes that corporate brand management is a dynamic process that involves keeping up with continuous adjustments of vision, culture and image. The model suggests an approach to corporate branding that is organisationally integrated and cross-functional, hence the thesis that it is important to bring the (whole) corporation into corporate branding.