
Showing papers from "University of Southern California" published in 2005


Proceedings ArticleDOI
22 Aug 2005
TL;DR: A new routing scheme, called Spray and Wait, "sprays" a number of copies into the network and then "waits" until one of the nodes carrying a copy meets the destination; it outperforms all existing schemes with respect to both average message delivery delay and number of transmissions per message delivered.
Abstract: Intermittently connected mobile networks are sparse wireless networks where most of the time there does not exist a complete path from the source to the destination. These networks fall into the general category of Delay Tolerant Networks. Many real networks follow this paradigm, for example, wildlife tracking sensor networks, military networks, inter-planetary networks, etc. In this context, conventional routing schemes would fail. To deal with such networks, researchers have suggested using flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. Furthermore, proposed efforts to significantly reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in mind, we introduce a new routing scheme, called Spray and Wait, that "sprays" a number of copies into the network and then "waits" until one of the nodes carrying a copy meets the destination. Using theory and simulations we show that Spray and Wait outperforms all existing schemes with respect to both average message delivery delay and number of transmissions per message delivered; its overall performance is close to that of the optimal scheme. Furthermore, it is highly scalable, retaining good performance under a large range of scenarios, unlike other schemes. Finally, it is simple to implement and to optimize in order to achieve given performance goals in practice.
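The spray-and-wait idea, in its binary variant (the source starts with L copies; any node holding more than one copy hands half to a relay it meets; nodes holding a single copy wait for the destination), can be sketched as a toy contact-driven simulation. The uniform-random mobility model and all names below are illustrative, not the paper's setup.

```python
import random

def binary_spray_and_wait(L, meet, dest, max_steps=100_000):
    """Simplified Binary Spray and Wait sketch (illustrative only).

    copies[node] = number of message copies the node holds.
    `meet()` yields a random pair of nodes that come into contact,
    standing in for node mobility. Delivery happens when any node
    holding a copy meets `dest` (direct transmission in the wait phase).
    Returns the step at which delivery occurred, or None.
    """
    copies = {0: L}                      # node 0 is the source
    for step in range(1, max_steps + 1):
        a, b = meet()
        for u, v in ((a, b), (b, a)):
            if copies.get(u, 0) > 0 and v == dest:
                return step              # delivered
            if copies.get(u, 0) > 1 and copies.get(v, 0) == 0:
                half = copies[u] // 2    # spray phase: hand over half
                copies[v] = half
                copies[u] -= half
    return None

# Toy mobility: N nodes meeting uniformly at random
N, dest = 50, 49
rng = random.Random(0)
def meet():
    return rng.sample(range(N), 2)

print(binary_spray_and_wait(L=8, meet=meet, dest=dest))
```

With only 8 copies sprayed among 50 nodes, delivery still occurs quickly in expectation, which is the scheme's point: few transmissions per delivered message.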

2,712 citations


Journal ArticleDOI
TL;DR: In this article, the authors devise a scale to measure the strength of consumers' emotional attachments to brands; the scale is positively associated with indicators of both commitment and investment, as well as with satisfaction, involvement, and brand attitudes.

2,143 citations


Journal ArticleDOI
TL;DR: The authors describe qualitative research as "inquiry aimed at describing and clarifying human experience as it appears in people's lives." Qualitative data are gathered primarily in the form of spoken or written descriptions.
Abstract: Qualitative research is inquiry aimed at describing and clarifying human experience as it appears in people's lives. Researchers using qualitative methods gather data that serve as evidence for their distilled descriptions. Qualitative data are gathered primarily in the form of spoken or written language …

2,067 citations


Journal ArticleDOI
TL;DR: Gestational diabetes mellitus (GDM) is defined as glucose intolerance of various degrees that is first detected during pregnancy and provides a unique opportunity to study the early pathogenesis of diabetes and to develop interventions to prevent the disease.
Abstract: Gestational diabetes mellitus (GDM) is defined as glucose intolerance of various degrees that is first detected during pregnancy. GDM is detected through the screening of pregnant women for clinical risk factors and, among at-risk women, testing for abnormal glucose tolerance that is usually, but not invariably, mild and asymptomatic. GDM appears to result from the same broad spectrum of physiological and genetic abnormalities that characterize diabetes outside of pregnancy. Indeed, women with GDM are at high risk for having or developing diabetes when they are not pregnant. Thus, GDM provides a unique opportunity to study the early pathogenesis of diabetes and to develop interventions to prevent the disease.

1,960 citations


Journal ArticleDOI
TL;DR: It is argued that addicted people become unable to make drug-use choices on the basis of long-term outcome, and a neural framework is proposed that explains this myopia for future consequences.
Abstract: Here I argue that addicted people become unable to make drug-use choices on the basis of long-term outcome, and I propose a neural framework that explains this myopia for future consequences. I suggest that addiction is the product of an imbalance between two separate, but interacting, neural systems that control decision making: an impulsive, amygdala system for signaling pain or pleasure of immediate prospects, and a reflective, prefrontal cortex system for signaling pain or pleasure of future prospects. After an individual learns social rules, the reflective system controls the impulsive system via several mechanisms. However, this control is not absolute; hyperactivity within the impulsive system can override the reflective system. I propose that drugs can trigger bottom-up, involuntary signals originating from the amygdala that modulate, bias or even hijack the goal-driven cognitive resources that are needed for the normal operation of the reflective system and for exercising the willpower to resist drugs.

1,906 citations


Journal Article
TL;DR: To regain relevancy, business schools must rediscover the practice of business and find a way to balance the dual mission of educating practitioners and creating knowledge through research.
Abstract: Business schools are facing intense criticism for failing to impart useful skills, failing to prepare leaders, failing to instill norms of ethical behavior--and even failing to lead graduates to good corporate jobs. These criticisms come not just from students, employers, and the media but also from deans of some of America's most prestigious B schools. The root cause of today's crisis in management education, assert Warren G. Bennis and James O'Toole, is that business schools have adopted an inappropriate--and ultimately self-defeating--model of academic excellence. Instead of measuring themselves in terms of the competence of their graduates, or by how well their faculty members understand important drivers of business performance, they assess themselves almost solely by the rigor of their scientific research. This scientific model is predicated on the faulty assumption that business is an academic discipline like chemistry or geology when, in fact, business is a profession and business schools are professional schools--or should be. Business school deans may claim that their schools remain focused on practice, but they nevertheless hire and promote research-oriented professors who haven't spent time working in companies and are more comfortable teaching methodology than messy, multidisciplinary issues--the very stuff of management. The authors don't advocate a return to the days when business schools were glorified trade schools. But to regain relevancy, they say, business schools must rediscover the practice of business and find a way to balance the dual mission of educating practitioners and creating knowledge through research.

1,885 citations


Posted ContentDOI
TL;DR: The authors proposed a modified version of Swamy's test of slope homogeneity for panel data models where the cross section dimension (N) could be large relative to the time series dimension (T).
Abstract: This paper proposes a modified version of Swamy's test of slope homogeneity for panel data models where the cross section dimension (N) could be large relative to the time series dimension (T). The proposed test exploits the cross section dispersion of individual slopes weighted by their relative precision. In the case of models with strictly exogenous regressors and normally distributed errors, the test is shown to have a standard normal distribution. Using Monte Carlo experiments, it is shown that the test has the correct size and satisfactory power in panels with strictly exogenous regressors for various combinations of N and T. For autoregressive (AR) models the proposed test performs well for moderate values of the root of the autoregressive process. But for AR models with roots near unity a bias-corrected bootstrapped version of the test is proposed which performs well even if N is large relative to T. The proposed cross section dispersion tests are applied to testing the homogeneity of slopes in autoregressive models of individual earnings using the PSID data. The results show statistically significant evidence of slope heterogeneity in the earnings dynamics, even when individuals with similar educational backgrounds are considered as sub-sets.
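The idea behind a Swamy-type dispersion test can be sketched numerically: estimate a slope per cross-section unit, then measure the precision-weighted dispersion of those slopes around a pooled estimate, which is approximately chi-square under homogeneity. This is a stylized one-regressor illustration of the general idea, not the exact test statistic proposed in the paper.

```python
import numpy as np

def slope_dispersion_test(y, x):
    """Stylized Swamy-type dispersion test for slope homogeneity.

    y, x: arrays of shape (N, T) -- N cross-section units, T periods.
    Fits a one-regressor OLS slope per unit, then measures the
    precision-weighted dispersion of the slopes around a pooled
    (precision-weighted) estimate. Under homogeneity the statistic is
    approximately chi-square with N-1 degrees of freedom; it is
    standardized here to a normal deviate for large N.
    """
    N, T = y.shape
    b = np.empty(N)
    v = np.empty(N)
    for i in range(N):
        xi = x[i] - x[i].mean()
        yi = y[i] - y[i].mean()
        bi = (xi @ yi) / (xi @ xi)
        resid = yi - bi * xi
        s2 = (resid @ resid) / (T - 2)
        b[i] = bi
        v[i] = s2 / (xi @ xi)            # variance of the slope estimate
    w = 1.0 / v
    b_pooled = (w @ b) / w.sum()
    S = np.sum((b - b_pooled) ** 2 / v)  # weighted cross-section dispersion
    return (S - (N - 1)) / np.sqrt(2 * (N - 1))

rng = np.random.default_rng(0)
N, T = 200, 30
x = rng.standard_normal((N, T))
# Homogeneous slopes: the standardized statistic should be near 0
y_hom = 1.0 * x + rng.standard_normal((N, T))
# Heterogeneous slopes: it should be large and positive
betas = rng.normal(1.0, 0.5, size=(N, 1))
y_het = betas * x + rng.standard_normal((N, T))
print(slope_dispersion_test(y_hom, x), slope_dispersion_test(y_het, x))
```

The contrast between the two simulated panels shows why the test has power: heterogeneity inflates the weighted dispersion far beyond its null expectation.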

1,751 citations


Journal ArticleDOI
TL;DR: Leveraging technology from the visual simulation and virtual reality communities, serious games provide a delivery system for organizational video game instruction and training.
Abstract: During the past decades, the virtual reality community has based its development on a synthesis of earlier work in interactive 3D graphics, user interfaces, and visual simulation. Currently, the VR field is transitioning into work influenced by video games. Because much of the research and development being conducted in the games community parallels the VR community's efforts, it has the potential to affect a greater audience. Given these trends, VR researchers who want their work to remain relevant must realign to focus on game research and development. Leveraging technology from the visual simulation and virtual reality communities, serious games provide a delivery system for organizational video game instruction and training.

1,685 citations


Journal ArticleDOI
19 Oct 2005-JAMA
TL;DR: Atypical antipsychotic drugs may be associated with a small increased risk for death compared with placebo, and this risk should be considered within the context of medical need for the drugs, efficacy evidence, medical comorbidity, and the efficacy and safety of alternatives.
Abstract: Context: Atypical antipsychotic medications are widely used to treat delusions, aggression, and agitation in people with Alzheimer disease and other dementia; however, concerns have arisen about the increased risk for cerebrovascular adverse events, rapid cognitive decline, and mortality with their use. Objective: To assess the evidence for increased mortality from atypical antipsychotic drug treatment for people with dementia. Data Sources: MEDLINE (1966 to April 2005), the Cochrane Controlled Trials Register (2005, Issue 1), meeting presentations (1997-2004), and information from the sponsors were searched using the terms for atypical antipsychotic drugs (aripiprazole, clozapine, olanzapine, quetiapine, risperidone, and ziprasidone), dementia, Alzheimer disease, and clinical trial. Study Selection: Published and unpublished randomized placebo-controlled, parallel-group clinical trials of atypical antipsychotic drugs marketed in the United States to treat patients with Alzheimer disease or dementia were selected by consensus of the authors. Data Extraction: Trials, baseline characteristics, outcomes, all-cause dropouts, and deaths were extracted by one reviewer; treatment exposure was obtained or estimated. Data were checked by a second reviewer. Data Synthesis: Fifteen trials (9 unpublished), generally 10 to 12 weeks in duration, including 16 contrasts of atypical antipsychotic drugs with placebo met criteria (aripiprazole [n = 3], olanzapine [n = 5], quetiapine [n = 3], risperidone [n = 5]). A total of 3353 patients were randomized to study drug and 1757 were randomized to placebo. Outcomes were assessed using standard methods (with random- or fixed-effects models) to calculate odds ratios (ORs) and risk differences based on patients randomized, and relative risks based on total exposure to treatment. There were no differences in dropouts. Death occurred more often among patients randomized to drugs (118 [3.5%] vs 40 [2.3%]; the OR by meta-analysis was 1.54; 95% confidence interval [CI], 1.06-2.23; P = .02; and the risk difference was 0.01; 95% CI, 0.004-0.02; P = .01). Sensitivity analyses did not show evidence for differential risks for individual drugs, severity, sample selection, or diagnosis. Conclusions: Atypical antipsychotic drugs may be associated with a small increased risk for death compared with placebo. This risk should be considered within the context of medical need for the drugs, efficacy evidence, medical comorbidity, and the efficacy and safety of alternatives. Individual patient analyses modeling survival and causes of death are needed.
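As a sanity check on the reported figures, a crude odds ratio with a Wald (log-scale) confidence interval can be computed from the pooled counts (118 deaths among 3353 drug-assigned patients vs 40 among 1757 on placebo). The paper's OR of 1.54 comes from a trial-stratified meta-analysis, so this crude pooled calculation only approximates it.

```python
import math

def odds_ratio_ci(a, n1, c, n0, z=1.96):
    """Crude odds ratio with a Wald (log-scale) 95% CI.

    a deaths among n1 drug-treated patients, c deaths among n0
    placebo patients. Pooling counts across trials ignores the
    trial-level stratification that the meta-analysis preserves,
    so the result only approximates the reported OR.
    """
    b, d = n1 - a, n0 - c                       # survivors in each arm
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(118, 3353, 40, 1757))
```

The crude calculation gives an OR of about 1.57 with a 95% CI of roughly 1.09 to 2.25, consistent with the meta-analytic 1.54 (1.06-2.23).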

1,534 citations


Journal ArticleDOI
TL;DR: The purpose of this document is to provide a single resource on current standards of care pertaining specifically to children and adolescents with type 1 diabetes.
Abstract: During recent years, the American Diabetes Association (ADA) has published detailed guidelines and recommendations for the management of diabetes in the form of technical reviews, position statements, and consensus statements. Recommendations regarding children and adolescents have generally been included as only a minor portion of these documents. For example, the most recent ADA position statement on “Standards of Medical Care for Patients With Diabetes Mellitus” (last revised October 2003) included “special considerations” for children and adolescents (1). Other position statements included age-specific recommendations for screening for nephropathy (2) and retinopathy (3) in children with diabetes. In addition, the ADA has published guidelines pertaining to certain aspects of diabetes that apply exclusively to children and adolescents, including care of children with diabetes at school (4) and camp (5) and a consensus statement on type 2 diabetes in children and adolescents (6). The purpose of this document is to provide a single resource on current standards of care pertaining specifically to children and adolescents with type 1 diabetes. It is not meant to be an exhaustive compendium on all aspects of the management of pediatric diabetes. However, relevant references are provided and current works in progress are indicated as such. The information provided is based on evidence from published studies whenever possible and, when not, supported by expert opinion or consensus (7). Several excellent detailed guidelines and chapters on type 1 diabetes in pediatric endocrinology texts exist, including those by the International Society of Pediatric and Adolescent Diabetes (ISPAD) (8), by the Australian Pediatric Endocrine Group (www.chw.edu/au/prof/services/endocrinology/apeg), in Lifshitz’s Pediatric Endocrinology (9–11), and by Plotnick and colleagues (12,13). Children have characteristics and needs that dictate different standards of care. The management of diabetes in children must take the major differences between children of various ages and …

1,339 citations


Journal ArticleDOI
TL;DR: The paper presents results on improving application performance through workflow restructuring, which clusters multiple tasks in a workflow into single entities.
Abstract: This paper describes the Pegasus framework that can be used to map complex scientific workflows onto distributed resources. Pegasus enables users to represent the workflows at an abstract level without needing to worry about the particulars of the target execution systems. The paper describes general issues in mapping applications and the functionality of Pegasus. We present the results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities. A real-life astronomy application is used as the basis for the study.
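The clustering idea (bundling several workflow tasks into one schedulable entity to cut per-job overhead) can be sketched on a toy DAG. The level-based grouping below is only one simple strategy, chosen for illustration; Pegasus's actual restructuring options are richer, and all names here are invented.

```python
from collections import defaultdict

def cluster_by_level(deps, max_cluster=3):
    """Illustrative workflow restructuring by task clustering: group
    tasks that sit at the same depth of the DAG into bundles of up to
    `max_cluster` jobs, so each bundle is submitted as one entity.

    deps: dict task -> list of prerequisite tasks (every task is a key).
    """
    level = {}
    def depth(t):
        if t not in level:
            level[t] = 1 + max((depth(p) for p in deps.get(t, ())), default=-1)
        return level[t]
    for t in deps:
        depth(t)
    by_level = defaultdict(list)
    for t, l in sorted(level.items()):
        by_level[l].append(t)
    clusters = []
    for l in sorted(by_level):
        tasks = by_level[l]
        clusters += [tasks[i:i + max_cluster]
                     for i in range(0, len(tasks), max_cluster)]
    return clusters

# Diamond-shaped toy workflow: three independent roots feed two joins
deps = {"a": [], "b": [], "c": [], "d": ["a", "b"], "e": ["b", "c"], "f": ["d", "e"]}
print(cluster_by_level(deps))  # → [['a', 'b', 'c'], ['d', 'e'], ['f']]
```

Tasks within one bundle have no dependencies on each other (they share a depth), so each bundle can safely run as a single submitted job.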

Journal ArticleDOI
01 Jul 2005-Stroke
TL;DR: A novel endovascular embolectomy device can significantly restore vascular patency during acute ischemic stroke within 8 hours of stroke symptom onset and provides an alternative intervention for patients who are otherwise ineligible for thrombolytics.
Abstract: Background and Purpose: The only Food and Drug Administration (FDA)-approved treatment for acute ischemic stroke is tissue plasminogen activator (tPA) given intravenously within 3 hours of symptom onset. An alternative strategy for opening intracranial vessels during stroke is mechanical embolectomy, especially for patients ineligible for intravenous tPA. Methods: We investigated the safety and efficacy of a novel embolectomy device (Merci Retriever) to open occluded intracranial large vessels within 8 hours of the onset of stroke symptoms in a prospective, nonrandomized, multicenter trial. All patients were ineligible for intravenous tPA. Primary outcomes were recanalization and safety, and secondary outcomes were neurological outcome at 90 days in recanalized versus nonrecanalized patients. Results: Recanalization was achieved in 46% (69/151) of patients on intention-to-treat analysis, and in 48% (68/141) of patients in whom the device was deployed. This rate is significantly higher than that expected using a historical control of 18% (P<0.0001). Clinically significant procedural complications occurred in 10 of 141 (7.1%) patients. Symptomatic intracranial hemorrhage was observed in 11 of 141 (7.8%) patients. Good neurological outcomes (modified Rankin score ≤2) were more frequent at 90 days in patients with successful recanalization compared with patients with unsuccessful recanalization (46% versus 10%; relative risk [RR], 4.4; 95% CI, 2.1 to 9.3; P<0.0001), and mortality was lower (32% versus 54%; RR, 0.59; 95% CI, 0.39 to 0.89; P<0.01). Conclusions: A novel endovascular embolectomy device can significantly restore vascular patency during acute ischemic stroke within 8 hours of stroke symptom onset and provides an alternative intervention for patients who are otherwise ineligible for thrombolytics. (Stroke. 2005;36:1432-1440.)

Journal ArticleDOI
TL;DR: In this paper, the authors show that the fraction of publicly traded industrial firms that pay dividends is high when retained earnings are a large portion of total equity (and of total assets) and falls to near zero when most equity is contributed rather than earned.
Abstract: Consistent with a lifecycle theory of dividends, the fraction of publicly traded industrial firms that pays dividends is high when retained earnings are a large portion of total equity (and of total assets) and falls to near zero when most equity is contributed rather than earned. We observe a highly significant relation between the decision to pay dividends and the earned/contributed capital mix, controlling for profitability, growth, firm size, leverage, cash balances, and dividend history, a relation that also holds for dividend initiations and omissions. In our regressions, the mix of earned/contributed capital has a quantitatively greater impact than measures of profitability and growth opportunities. We document a massive increase in firms with negative retained earnings (from 11.8% of industrials in 1978 to 50.2% in 2002). Controlling for the earned/contributed capital mix, firms with negative retained earnings show virtually no change in their propensity to pay dividends from the mid-1970s to 2002, while those whose earned equity makes them reasonable candidates to pay dividends have a propensity reduction that is twice the overall reduction in Fama and French (2001). All our evidence supports the lifecycle theory of dividends, in which a firm's stage in that cycle is well-proxied by its mix of internal and external capital.

Book ChapterDOI
11 Jul 2005
TL;DR: A natural and general model of influence propagation, the decreasing cascade model, generalizing models used in the sociology and economics communities; in this model, a natural greedy algorithm is a (1 - 1/e - ε)-approximation for selecting a target set of size k.
Abstract: We study the problem of maximizing the expected spread of an innovation or behavior within a social network, in the presence of “word-of-mouth” referral. Our work builds on the observation that individuals’ decisions to purchase a product or adopt an innovation are strongly influenced by recommendations from their friends and acquaintances. Understanding and leveraging this influence may thus lead to a much larger spread of the innovation than the traditional view of marketing to individuals in isolation. In this paper, we define a natural and general model of influence propagation that we term the decreasing cascade model, generalizing models used in the sociology and economics communities. In this model, as in related ones, a behavior spreads in a cascading fashion according to a probabilistic rule, beginning with a set of initially “active” nodes. We study the target set selection problem: we wish to choose a set of individuals to target for initial activation, such that the cascade beginning with this active set is as large as possible in expectation. We show that in the decreasing cascade model, a natural greedy algorithm is a (1 - 1/e - ε)-approximation for selecting a target set of size k.
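The greedy target-set selection behind the (1 - 1/e - ε) guarantee can be sketched concretely: repeatedly add the node with the largest estimated marginal gain in expected spread, where the expectation is estimated by Monte Carlo simulation (the ε accounts for this sampling error). The sketch below uses the independent cascade model, a special case of the decreasing cascade model; the graph, activation probability, and run counts are illustrative.

```python
import random

def simulate_cascade(graph, seeds, p, rng):
    """One run of the independent cascade model: each newly activated
    node gets a single chance to activate each inactive neighbor with
    probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, p=0.1, runs=200, seed=0):
    """Greedy target-set selection: repeatedly add the node with the
    largest Monte Carlo estimate of marginal gain in expected spread."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    chosen = []
    for _ in range(k):
        def gain(u):
            return sum(simulate_cascade(graph, chosen + [u], p, rng)
                       for _ in range(runs)) / runs
        chosen.append(max(nodes - set(chosen), key=gain))
    return chosen

# Tiny toy graph: node 0 is a hub, so greedy should pick it first,
# then reach into the disconnected {5, 6} component.
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0], 5: [6], 6: [5]}
print(greedy_seeds(graph, k=2, p=0.3))
```

Greedy picks the hub first, then a node from the otherwise unreachable component, illustrating the submodular "diminishing returns" structure that the approximation guarantee rests on.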

Journal ArticleDOI
TL;DR: In this article, the authors review models for assessing intraurban exposure in six classes: proximity-based assessments, statistical interpolation, land use regression models, line dispersion models, integrated emission-meteorological models, and hybrid models combining personal or household exposure monitoring with one of the preceding methods.
Abstract: The development of models to assess air pollution exposures within cities for assignment to subjects in health studies has been identified as a priority area for future research. This paper reviews models for assessing intraurban exposure under six classes, including: (i) proximity-based assessments, (ii) statistical interpolation, (iii) land use regression models, (iv) line dispersion models, (v) integrated emission-meteorological models, and (vi) hybrid models combining personal or household exposure monitoring with one of the preceding methods. We enrich this review of the modelling procedures and results with applied examples from Hamilton, Canada. In addition, we qualitatively evaluate the models based on key criteria important to health effects assessment research. Hybrid models appear well suited to overcoming the problem of achieving population representative samples while understanding the role of exposure variation at the individual level. Remote sensing and activity-space analysis will complement refinements in pre-existing methods, and with expected advances, the field of exposure assessment may help to reduce scientific uncertainties that now impede policy intervention aimed at protecting public health.

Journal ArticleDOI
TL;DR: Air pollution is associated with a broad spectrum of acute and chronic health effects, the nature of which may vary with the pollutant constituents, and particulate air pollution is consistently and independently related to the most serious effects, including lung cancer and other cardiopulmonary mortality.
Abstract: As part of the World Health Organization (WHO) Global Burden of Disease Comparative Risk Assessment, the burden of disease attributable to urban ambient air pollution was estimated in terms of deaths and disability-adjusted life years (DALYs). Air pollution is associated with a broad spectrum of acute and chronic health effects, the nature of which may vary with the pollutant constituents. Particulate air pollution is consistently and independently related to the most serious effects, including lung cancer and other cardiopulmonary mortality. The analyses on which this report is based estimate that ambient air pollution, in terms of fine particulate air pollution (PM(2.5)), causes about 3% of mortality from cardiopulmonary disease, about 5% of mortality from cancer of the trachea, bronchus, and lung, and about 1% of mortality from acute respiratory infections in children under 5 yr, worldwide. This amounts to about 0.8 million (1.2%) premature deaths and 6.4 million (0.5%) years of life lost (YLL). This burden occurs predominantly in developing countries; 65% in Asia alone. These estimates consider only the impact of air pollution on mortality (i.e., years of life lost) and not morbidity (i.e., years lived with disability), due to limitations in the epidemiologic database. If air pollution multiplies both incidence and mortality to the same extent (i.e., the same relative risk), then the DALYs for cardiopulmonary disease increase by 20% worldwide.

Journal ArticleDOI
TL;DR: The data support the utility of A. thaliana as a model for evolutionary functional genomics and suggest there is a genome-wide excess of rare alleles and too much variation between genomic regions in the level of polymorphism.
Abstract: We resequenced 876 short fragments in a sample of 96 individuals of Arabidopsis thaliana that included stock center accessions as well as a hierarchical sample from natural populations. Although A. thaliana is a selfing weed, the pattern of polymorphism in general agrees with what is expected for a widely distributed, sexually reproducing species. Linkage disequilibrium decays rapidly, within 50 kb. Variation is shared worldwide, although population structure and isolation by distance are evident. The data fail to fit standard neutral models in several ways. There is a genome-wide excess of rare alleles, at least partially due to selection. There is too much variation between genomic regions in the level of polymorphism. The local level of polymorphism is negatively correlated with gene density and positively correlated with segmental duplications. Because the data do not fit theoretical null distributions, attempts to infer natural selection from polymorphism data will require genome-wide surveys of polymorphism in order to identify anomalous regions. Despite this, our data support the utility of A. thaliana as a model for evolutionary functional genomics.

Journal ArticleDOI
TL;DR: The results suggest the chronic health effects associated with within-city gradients in exposure to PM2.5 may be even larger than previously reported across metropolitan areas, and nearly 3 times greater than in models relying on comparisons between communities.
Abstract: Background: The assessment of air pollution exposure using only community average concentrations may lead to measurement error that lowers estimates of the health burden attributable to poor air quality. To test this hypothesis, we modeled the association between air pollution and mortality using sma…

Journal ArticleDOI
TL;DR: This study uncovers and examines the variety of supply chain partnership configurations that exist based on differences in capability platforms, reflecting varying processes and information systems, and uses the absorptive capacity lens to build a conceptual framework that links these configurations with partner-enabled market knowledge creation.
Abstract: The need for continual value innovation is driving supply chains to evolve from a pure transactional focus to leveraging interorganizational partnerships for sharing information and, ultimately, market knowledge creation. Supply chain partners are (1) engaging in interlinked processes that enable rich (broad-ranging, high quality, and privileged) information sharing, and (2) building information technology infrastructures that allow them to process information obtained from their partners to create new knowledge. This study uncovers and examines the variety of supply chain partnership configurations that exist based on differences in capability platforms, reflecting varying processes and information systems. We use the absorptive capacity lens to build a conceptual framework that links these configurations with partner-enabled market knowledge creation. Absorptive capacity refers to the set of organizational routines and processes by which organizations acquire, assimilate, transform, and exploit knowledge to produce dynamic organizational capabilities. Through an exploratory field study conducted in the context of the RosettaNet consortium effort in the IT industry supply chain, we use cluster analysis to uncover and characterize five supply chain partnership configurations (collectors, connectors, crunchers, coercers, and collaborators). We compare their partner-enabled knowledge creation and operational efficiency, as well as the shortcomings in their capability platforms and the nature of information exchange. Through the characterization of each of the configurations, we are able to derive research propositions focused on enterprise absorptive capacity elements. These propositions provide insight into how partner-enabled market knowledge creation and operational efficiency can be affected, and highlight the interconnected roles of coordination information and rich information. 
The paper concludes by drawing implications for research and practice from the uncovering of these configurations and the resultant research propositions. It also highlights fertile opportunities for advances in research on knowledge management through the study of supply chain contexts and other interorganizational partnering arrangements.

Journal ArticleDOI
TL;DR: This paper explores the detection of domain-specific emotions using language and discourse information in conjunction with acoustic correlates of emotion in speech signals on a case study of detecting negative and non-negative emotions using spoken language data obtained from a call center application.
Abstract: The importance of automatically recognizing emotions from human speech has grown with the increasing role of spoken language interfaces in human-computer interaction applications. This paper explores the detection of domain-specific emotions using language and discourse information in conjunction with acoustic correlates of emotion in speech signals. The specific focus is on a case study of detecting negative and non-negative emotions using spoken language data obtained from a call center application. Most previous studies in emotion recognition have used only the acoustic information contained in speech. In this paper, a combination of three sources of information-acoustic, lexical, and discourse-is used for emotion recognition. To capture emotion information at the language level, an information-theoretic notion of emotional salience is introduced. Optimization of the acoustic correlates of emotion with respect to classification error was accomplished by investigating different feature sets obtained from feature selection, followed by principal component analysis. Experimental results on our call center data show that the best results are obtained when acoustic and language information are combined. Results show that combining all the information, rather than using only acoustic information, improves emotion classification by 40.7% for males and 36.4% for females (linear discriminant classifier used for acoustic information).
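The information-theoretic notion of emotional salience assigns high scores to words whose presence is informative about the emotion class. A minimal sketch of that idea is below, scoring each word by the divergence of the class posterior (given the word) from the class prior; the exact formulation and smoothing in the paper may differ, and the toy call-center utterances are invented.

```python
import math
from collections import Counter, defaultdict

def emotional_salience(data):
    """Illustrative emotional salience: for each word, how much does
    observing the word shift the emotion-class distribution away from
    the prior? data: list of (list_of_words, label) pairs."""
    label_counts = Counter(lbl for _, lbl in data)
    total = sum(label_counts.values())
    prior = {k: c / total for k, c in label_counts.items()}
    word_label = defaultdict(Counter)
    for words, lbl in data:
        for w in set(words):             # count presence, not frequency
            word_label[w][lbl] += 1
    sal = {}
    for w, counts in word_label.items():
        n = sum(counts.values())
        # KL divergence of P(class | word) from the class prior, in bits
        sal[w] = sum((c / n) * math.log((c / n) / prior[k], 2)
                     for k, c in counts.items())
    return sal

data = [
    (["this", "is", "terrible"], "negative"),
    (["totally", "terrible", "service"], "negative"),
    (["this", "is", "fine"], "non-negative"),
    (["service", "was", "fine"], "non-negative"),
]
sal = emotional_salience(data)
print(sal["terrible"], sal["this"])  # → 1.0 0.0
```

"terrible" occurs only in negative utterances, so it earns the maximum 1 bit of salience for this two-class prior; "this" occurs equally in both classes and earns none, which is exactly the discrimination the lexical channel contributes on top of acoustics.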

Journal ArticleDOI
27 Jul 2005-JAMA
TL;DR: Clinical parameters (PSADT, pathological Gleason score, and time from surgery to biochemical recurrence) can help risk stratify patients for prostate cancer–specific mortality following biochemical recurrence after radical prostatectomy.
Abstract: ContextThe natural history of biochemical recurrence after radical prostatectomy can be long but variable. Better risk assessment models are needed to identify men who are at high risk for prostate cancer death early and who may benefit from aggressive salvage treatment and to identify men who are at low risk for prostate cancer death and can be safely observed.ObjectivesTo define risk factors for prostate cancer death following radical prostatectomy and to develop tables to risk stratify for prostate cancer–specific survival.Design, Setting, and PatientsRetrospective cohort study of 379 men who had undergone radical prostatectomy at an urban tertiary care hospital between 1982 and 2000 and who had a biochemical recurrence and after biochemical failure had at least 2 prostate-specific antigen (PSA) values at least 3 months apart in order to calculate PSA doubling time (PSADT). The mean (SD) follow-up after surgery was 10.3 (4.7) years and median follow-up was 10 years (range, 1-20 years).Main Outcome MeasureProstate cancer–specific mortality.ResultsMedian survival had not been reached after 16 years of follow-up after biochemical recurrence. Prostate-specific doubling time (<3.0 vs 3.0-8.9 vs 9.0-14.9 vs ≥15.0 months), pathological Gleason score (≤7 vs 8-10), and time from surgery to biochemical recurrence (≤3 vs >3 years) were all significant risk factors for time to prostate-specific mortality. Using these 3 variables, tables were constructed to estimate the risk of prostate cancer–specific survival at year 15 after biochemical recurrence.ConclusionClinical parameters (PSADT, pathological Gleason score, and time from surgery to biochemical recurrence) can help risk stratify patients for prostate cancer–specific mortality following biochemical recurrence after radical prostatectomy. 
These preliminary findings may serve as useful guides to patients and their physicians to identify patients at high risk for prostate cancer–specific mortality following biochemical recurrence after radical prostatectomy to enroll them in early aggressive treatment trials. In addition, these preliminary findings highlight that survival in low-risk patients can be quite prolonged.
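The PSADT risk factor above is derived from serial PSA measurements taken at least 3 months apart. As a minimal illustrative sketch (not the study's exact method, which fit all available post-failure PSA values rather than just two), the doubling time implied by a pair of rising PSA values can be computed from the log-ratio:

```python
import math

def psa_doubling_time(psa1, psa2, months_apart):
    """Months for PSA to double, assuming exponential growth between
    two measurements. Hypothetical helper for illustration only."""
    if psa2 <= psa1:
        raise ValueError("PSA must be rising to compute a doubling time")
    return months_apart * math.log(2) / (math.log(psa2) - math.log(psa1))

# PSA rising from 2.0 to 4.0 ng/mL over 6 months doubles exactly once:
print(psa_doubling_time(2.0, 4.0, 6.0))  # -> 6.0
```

Under the study's cutoffs, a 6-month PSADT would fall in the high-risk 3.0-8.9 month stratum.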

Journal ArticleDOI
TL;DR: The current understanding of the pathophysiology of experimental drug hepatotoxicity is examined, focusing on acetaminophen, particularly with respect to the role of the innate immune system and control of cell-death pathways, which might provide targets for exploration and identification of risk factors and mechanisms in humans.
Abstract: The occurrence of idiosyncratic drug hepatotoxicity is a major problem in all phases of clinical drug development and the most frequent cause of post-marketing warnings and withdrawals. This review examines the clinical signatures of this problem, signals predictive of its occurrence (particularly of more frequent, reversible, low-grade injury), and the role of monitoring in prevention by examining several recent examples (for example, troglitazone). In addition, the failure of preclinical toxicology to predict idiosyncratic reactions, and what can be done to improve this problem, is discussed. Finally, our current understanding of the pathophysiology of experimental drug hepatotoxicity is examined, focusing on acetaminophen, particularly with respect to the role of the innate immune system and control of cell-death pathways, which might provide targets for exploration and identification of risk factors and mechanisms in humans.

Journal ArticleDOI
TL;DR: In this paper, the authors examine three-day cumulative abnormal returns around the announcement of 702 newly appointed outside directors assigned to audit committees during a period before implementation of the Sarbanes-Oxley Act (SOX).
Abstract: We examine three-day cumulative abnormal returns around the announcement of 702 newly appointed outside directors assigned to audit committees during a period before implementation of the Sarbanes-Oxley Act (SOX). Motivated by the SOX requirement that public companies disclose whether they have a financial expert on their audit committee, we test whether the market reacts favorably to the appointment of directors with financial expertise to the audit committee. In addition, because it is controversial whether SOX should define financial experts narrowly to include primarily accounting financial experts (as initially proposed) or more broadly to include nonaccounting financial experts (as ultimately passed), we separately examine appointments of each type of expert. We find a positive market reaction to the appointment of accounting financial experts assigned to audit committees but no reaction to nonaccounting financial experts assigned to audit committees, consistent with accounting-based financial skills, but not broader financial skills, improving the audit committee's ability to ensure high-quality financial reporting. In addition, we find that this positive reaction is concentrated among firms with relatively strong corporate governance, consistent with accounting financial expertise complementing strong governance, possibly because strong governance helps channel the expertise toward enhancing shareholder value. Together, these findings are consistent with financial expertise on audit committees improving corporate governance but only when both the expert and the appointing firm possess characteristics that facilitate the effective use of the expertise.

Proceedings ArticleDOI
17 Oct 2005
TL;DR: The human detection problem is formulated as maximum a posteriori (MAP) estimation, and edgelet features are introduced, which are a new type of silhouette oriented features that are learned by a boosting method.
Abstract: This paper proposes a method for human detection in crowded scenes from static images. An individual human is modeled as an assembly of natural body parts. We introduce edgelet features, which are a new type of silhouette-oriented features. Part detectors, based on these features, are learned by a boosting method. Responses of part detectors are combined to form a joint likelihood model that includes cases of multiple, possibly inter-occluded humans. The human detection problem is formulated as maximum a posteriori (MAP) estimation. We show results on a commonly used dataset as well as new datasets that could not be processed by earlier methods.

Journal ArticleDOI
TL;DR: The results suggest that AECs undergo EMT when chronically exposed to TGF-beta1, raising the possibility that epithelial cells may serve as a novel source of myofibroblasts in IPF.
Abstract: The hallmark of idiopathic pulmonary fibrosis (IPF) is the myofibroblast, the cellular origin of which in the lung is unknown. We hypothesized that alveolar epithelial cells (AECs) may serve as a source of myofibroblasts through epithelial-mesenchymal transition (EMT). Effects of chronic exposure to transforming growth factor (TGF)-β1 on the phenotype of isolated rat AECs in primary culture and a rat type II cell line (RLE-6TN) were evaluated. Additionally, tissue samples from patients with IPF were evaluated for cells co-expressing epithelial (thyroid transcription factor (TTF)-1 and pro-surfactant protein-B (pro-SP-B), and mesenchymal (α-smooth muscle actin (α-SMA)) markers. RLE-6TN cells exposed to TGF-β1 for 6 days demonstrated increased expression of mesenchymal cell markers and a fibroblast-like morphology, an effect augmented by tumor necrosis factor-α (TNF-α). Exposure of rat AECs to TGF-β1 (100 pmol/L) resulted in increased expression of α-SMA, type I collagen, vimentin, and desmin, with concurrent transition to a fibroblast-like morphology and decreased expression of TTF-1, aquaporin-5 (AQP5), zonula occludens-1 (ZO-1), and cytokeratins. Cells co-expressing epithelial markers and α-SMA were abundant in lung tissue from IPF patients. These results suggest that AECs undergo EMT when chronically exposed to TGF-β1, raising the possibility that epithelial cells may serve as a novel source of myofibroblasts in IPF.

Journal ArticleDOI
01 Apr 2005 - Methods
TL;DR: In the following report, several methods to measure GRP78 induction are presented; induction can be assessed by measuring Grp78 promoter activity or by measuring the level of Grp78 transcripts or GRP78 protein.

Book
01 Jul 2005
TL;DR: This book presents a comprehensive treatment of correlation in cyclic Hadamard sequences and its applications to radar, sonar, and synchronization, and describes the properties of correlation as well as applications to Boolean functions.
Abstract: This book provides a comprehensive description of the methodologies and the application areas, throughout the range of digital communication, in which individual signals and sets of signals with favorable correlation properties play a central role. The necessary mathematical background is presented to explain how these signals are generated, and to show how they satisfy the appropriate correlation constraints. All the known methods to obtain balanced binary sequences with two-valued autocorrelation, many of them only recently discovered, are presented in depth. The authors treat important application areas including: Code Division Multiple Access (CDMA) signals, such as those already in widespread use for cell-phone communication, and planned for universal adoption in the various approaches to 'third-generation'(3G) cell-phone use; systems for coded radar and sonar signals; communication signals to minimize mutual interference ('cross-talk') in multi-user environments; and pseudo-random sequence generation for secure authentication and for stream cipher cryptology.
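The "balanced binary sequences with two-valued autocorrelation" central to this book can be illustrated concretely. As a minimal sketch (the length-7 sequence below is one period of a standard m-sequence, used here only as an example), mapping bits 0/1 to +1/-1 and computing the periodic autocorrelation shows the characteristic two-valued profile: the full length N in phase, and -1 at every nonzero shift.

```python
def periodic_autocorrelation(bits, shift):
    """Periodic autocorrelation of a binary sequence at a given cyclic
    shift, with bits mapped 0 -> +1 and 1 -> -1."""
    n = len(bits)
    s = [1 - 2 * b for b in bits]
    return sum(s[i] * s[(i + shift) % n] for i in range(n))

# One period of a length-7 m-sequence (balanced: four 1s, three 0s):
m_seq = [1, 1, 1, 0, 1, 0, 0]
print([periodic_autocorrelation(m_seq, t) for t in range(7)])
# -> [7, -1, -1, -1, -1, -1, -1]
```

This sharply peaked autocorrelation is exactly what makes such sequences useful for synchronization and for separating users in CDMA systems.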

Journal ArticleDOI
TL;DR: The need for and methods of UFP exposure assessment are discussed, which may lead to systemic inflammation through oxidative stress responses to reactive oxygen species and thereby promote the progression of atherosclerosis and precipitate acute cardiovascular responses ranging from increased blood pressure to myocardial infarction.
Abstract: Numerous epidemiologic time-series studies have shown generally consistent associations of cardiovascular hospital admissions and mortality with outdoor air pollution, particularly mass concentrations of particulate matter (PM) ≤2.5 or ≤10 μm in diameter (PM2.5, PM10). Panel studies with repeated measures have supported the time-series results showing associations between PM and risk of cardiac ischemia and arrhythmias, increased blood pressure, decreased heart rate variability, and increased circulating markers of inflammation and thrombosis. The causal components driving the PM associations remain to be identified. Epidemiologic data using pollutant gases and particle characteristics such as particle number concentration and elemental carbon have provided indirect evidence that products of fossil fuel combustion are important. Ultrafine particles <0.1 μm (UFPs) dominate particle number concentrations and surface area and are therefore capable of carrying large concentrations of adsorbed or condensed toxic air pollutants. It is likely that redox-active components in UFPs from fossil fuel combustion reach cardiovascular target sites. High UFP exposures may lead to systemic inflammation through oxidative stress responses to reactive oxygen species and thereby promote the progression of atherosclerosis and precipitate acute cardiovascular responses ranging from increased blood pressure to myocardial infarction. The next steps in epidemiologic research are to identify more clearly the putative PM causal components and size fractions linked to their sources. To advance this, we discuss in a companion article (Sioutas C, Delfino RJ, Singh M. 2005. Environ Health Perspect 113:947-955) the need for and methods of UFP exposure assessment.

Journal ArticleDOI
TL;DR: Novel counterregulatory responses in inflammation initiated via RvE1 receptor activation that provide the first evidence for EPA-derived potent endogenous agonists of antiinflammation are demonstrated.
Abstract: The essential fatty acid eicosapentaenoic acid (EPA) present in fish oils displays beneficial effects in a range of human disorders associated with inflammation including cardiovascular disease. Resolvin E1 (RvE1), a new bioactive oxygenated product of EPA, was identified in human plasma and prepared by total organic synthesis. Results of bioaction and physical matching studies indicate that the complete structure of RvE1 is 5S,12R,18R-trihydroxy-6Z,8E,10E,14Z,16E-EPA. At nanomolar levels, RvE1 dramatically reduced dermal inflammation, peritonitis, dendritic cell (DC) migration, and interleukin (IL) 12 production. We screened receptors and identified one, denoted earlier as ChemR23, that mediates RvE1 signal to attenuate nuclear factor-kappaB. Specific binding of RvE1 to this receptor was confirmed using synthetic [(3)H]-labeled RvE1. Treatment of DCs with small interference RNA specific for ChemR23 sharply reduced RvE1 regulation of IL-12. These results demonstrate novel counterregulatory responses in inflammation initiated via RvE1 receptor activation that provide the first evidence for EPA-derived potent endogenous agonists of antiinflammation.

Proceedings ArticleDOI
29 Aug 2005
TL;DR: Socially assistive robotics, a research area focused on assisting people through social interaction, is defined; active projects are summarized and classified by target population, application domain, and interaction method.
Abstract: This paper defines the research area of socially assistive robotics, focusing on assisting people through social interaction. While much attention has been paid to robots that provide assistance to people through physical contact (which we call contact assistive robotics), and to robots that entertain through social interaction (social interactive robotics), so far there is no clear definition of socially assistive robotics. We summarize active socially assistive research projects and classify them by target populations, application domains, and interaction methods. While distinguishing these from socially interactive robotics endeavors, we discuss challenges and opportunities that are specific to the growing field of socially assistive robotics.