
Showing papers by "University of Texas at Austin" published in 2003


Journal ArticleDOI
02 Jan 2003-Nature
TL;DR: A diagnostic fingerprint of temporal and spatial 'sign-switching' responses uniquely predicted by twentieth-century climate trends is defined; this suite of analyses generates 'very high confidence' (as laid down by the IPCC) that climate change is already affecting living systems.
Abstract: Causal attribution of recent biological trends to climate change is complicated because non-climatic influences dominate local, short-term biological changes. Any underlying signal from climate change is likely to be revealed by analyses that seek systematic trends across diverse species and geographic regions; however, debates within the Intergovernmental Panel on Climate Change (IPCC) reveal several definitions of a 'systematic trend'. Here, we explore these differences, apply diverse analyses to more than 1,700 species, and show that recent biological trends match climate change predictions. Global meta-analyses documented significant range shifts averaging 6.1 km per decade towards the poles (or metres per decade upward), and significant mean advancement of spring events by 2.3 days per decade. We define a diagnostic fingerprint of temporal and spatial 'sign-switching' responses uniquely predicted by twentieth century climate trends. Among appropriate long-term/large-scale/multi-species data sets, this diagnostic fingerprint was found for 279 species. This suite of analyses generates 'very high confidence' (as laid down by the IPCC) that climate change is already affecting living systems.

9,761 citations


Journal ArticleDOI
TL;DR: In this paper, a 10-item measure of the Big-Five personality dimensions is proposed for situations where very short measures are needed, personality is not the primary topic of interest, or researchers can tolerate the somewhat diminished psychometric properties associated with very brief measures.

6,574 citations


Journal ArticleDOI
TL;DR: This article provides an overview of commercially available model predictive control (MPC) technology, both linear and nonlinear, based primarily on data provided by MPC vendors. A brief history of industrial MPC technology is presented first, followed by the results of a vendor survey of MPC control and identification technology.

4,819 citations


Journal ArticleDOI
TL;DR: This paper introduces the problem of combining multiple partitionings of a set of objects into a single consolidated clustering without accessing the features or algorithms that determined these partitionings and proposes three effective and efficient techniques for obtaining high-quality combiners (consensus functions).
Abstract: This paper introduces the problem of combining multiple partitionings of a set of objects into a single consolidated clustering without accessing the features or algorithms that determined these partitionings. We first identify several application scenarios for the resultant 'knowledge reuse' framework that we call cluster ensembles. The cluster ensemble problem is then formalized as a combinatorial optimization problem in terms of shared mutual information. In addition to a direct maximization approach, we propose three effective and efficient techniques for obtaining high-quality combiners (consensus functions). The first combiner induces a similarity measure from the partitionings and then reclusters the objects. The second combiner is based on hypergraph partitioning. The third one collapses groups of clusters into meta-clusters which then compete for each object to determine the combined clustering. Due to the low computational costs of our techniques, it is quite feasible to use a supra-consensus function that evaluates all three approaches against the objective function and picks the best solution for a given situation. We evaluate the effectiveness of cluster ensembles in three qualitatively different application scenarios: (i) where the original clusters were formed based on non-identical sets of features, (ii) where the original clustering algorithms worked on non-identical sets of objects, and (iii) where a common data-set is used and the main purpose of combining multiple clusterings is to improve the quality and robustness of the solution. Promising results are obtained in all three situations for synthetic as well as real data-sets.
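
The first of the three combiners described above (induce a similarity measure from the partitionings, then recluster the objects) is straightforward to prototype. The sketch below is only illustrative, under assumptions of my own: the function name consensus_from_labelings and the choice of average-linkage hierarchical reclustering are not from the paper, whose combiners (similarity-based reclustering, hypergraph partitioning, and meta-clustering) are described only at a high level above.

```python
# Illustrative co-association consensus, assuming hard labelings as input.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_from_labelings(labelings, n_clusters):
    """labelings: list of 1-D integer label arrays, one per base clustering."""
    labelings = [np.asarray(l) for l in labelings]
    n = len(labelings[0])
    # Induced similarity: fraction of partitionings that co-cluster objects i and j.
    coassoc = np.zeros((n, n))
    for labels in labelings:
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
    coassoc /= len(labelings)
    # Recluster the objects using the induced similarity as a distance.
    dist = 1.0 - coassoc
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Example: combine three noisy two-cluster partitionings of six objects.
base = [[0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1], [1, 1, 1, 0, 0, 0]]
print(consensus_from_labelings(base, n_clusters=2))  # e.g. [1 1 1 2 2 2], up to label renaming
```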

4,375 citations


Journal ArticleDOI
TL;DR: The development of the Self-Compassion Scale is described, a measure of the kindness and understanding one extends toward oneself in instances of pain or failure rather than being harshly self-critical.
Abstract: This article defines the construct of self-compassion and describes the development of the Self-Compassion Scale. Self-compassion entails being kind and understanding toward oneself in instances of pain or failure rather than being harshly self-critical; perceiving one's experiences as part of the larger human experience rather than seeing them as isolating; and holding painful thoughts and feelings in mindful awareness rather than over-identifying with them. Evidence for the validity and reliability of the scale is presented in a series of studies. Results indicate that self-compassion is significantly correlated with positive mental health outcomes such as less depression and anxiety and greater life satisfaction. Evidence is also provided for the discriminant validity of the scale, including with regard to self-esteem measures.

4,176 citations


Journal ArticleDOI
TL;DR: Self-compassion is an emotionally positive self-attitude that should protect against the negative consequences of self-judgment, isolation, and rumination (such as depression), and counter the tendencies towards narcissism, self-centeredness, and downward social comparison that have been associated with attempts to maintain self-esteem.
Abstract: This article defines and examines the construct of self-compassion. Self-compassion entails three main components: (a) self-kindness—being kind and understanding toward oneself in instances of pain or failure rather than being harshly self-critical, (b) common humanity—perceiving one's experiences as part of the larger human experience rather than seeing them as separating and isolating, and (c) mindfulness—holding painful thoughts and feelings in balanced awareness rather than over-identifying with them. Self-compassion is an emotionally positive self-attitude that should protect against the negative consequences of self-judgment, isolation, and rumination (such as depression). Because of its non-evaluative and interconnected nature, it should also counter the tendencies towards narcissism, self-centeredness, and downward social comparison that have been associated with attempts to maintain self-esteem. The relation of self-compassion to other psychological constructs is examined, its links to psychologi...

3,350 citations


Journal ArticleDOI
TL;DR: Design experiments have both a pragmatic bent and a theoretical orientation, developing domain-specific theories by systematically studying particular forms of learning and the means of supporting them; the authors clarify what is involved in preparing for and carrying out a design experiment, and in conducting a retrospective analysis of the extensive, longitudinal data sets generated during an experiment.
Abstract: In this article, the authors first indicate the range of purposes and the variety of settings in which design experiments have been conducted and then delineate five crosscutting features that collectively differentiate design experiments from other methodologies. Design experiments have both a pragmatic bent—“engineering” particular forms of learning—and a theoretical orientation—developing domain-specific theories by systematically studying those forms of learning and the means of supporting them. The authors clarify what is involved in preparing for and carrying out a design experiment, and in conducting a retrospective analysis of the extensive, longitudinal data sets generated during an experiment. Logistical issues, issues of measure, the importance of working through the data systematically, and the need to be explicit about the criteria for making inferences are discussed.

3,121 citations


Journal ArticleDOI
TL;DR: The term "vulnerable patient" may be more appropriate and is proposed for the identification of subjects with a high likelihood of developing cardiac events in the near future; a quantitative method for cumulative risk assessment of such vulnerable patients still needs to be developed.
Abstract: Atherosclerotic cardiovascular disease results in >19 million deaths annually, and coronary heart disease accounts for the majority of this toll. Despite major advances in treatment of coronary heart disease patients, a large number of victims of the disease who are apparently healthy die suddenly without prior symptoms. Available screening and diagnostic methods are insufficient to identify the victims before the event occurs. The recognition of the role of the vulnerable plaque has opened new avenues of opportunity in the field of cardiovascular medicine. This consensus document concludes the following. (1) Rupture-prone plaques are not the only vulnerable plaques. All types of atherosclerotic plaques with high likelihood of thrombotic complications and rapid progression should be considered as vulnerable plaques. We propose a classification for clinical as well as pathological evaluation of vulnerable plaques. (2) Vulnerable plaques are not the only culprit factors for the development of acute coronary syndromes, myocardial infarction, and sudden cardiac death. Vulnerable blood (prone to thrombosis) and vulnerable myocardium (prone to fatal arrhythmia) play an important role in the outcome. Therefore, the term "vulnerable patient" may be more appropriate and is proposed now for the identification of subjects with high likelihood of developing cardiac events in the near future. (3) A quantitative method for cumulative risk assessment of vulnerable patients needs to be developed that may include variables based on plaque, blood, and myocardial vulnerability. In Part I of this consensus document, we cover the new definition of vulnerable plaque and its relationship with vulnerable patients. Part II of this consensus document focuses on vulnerable blood and vulnerable myocardium and provides an outline of overall risk assessment of vulnerable patients. Parts I and II are meant to provide a general consensus and an overview of the new field of the vulnerable patient. Recently developed assays (eg, C-reactive protein), imaging techniques (eg, CT and MRI), noninvasive electrophysiological tests (for vulnerable myocardium), and emerging catheters (to localize and characterize vulnerable plaque) in combination with future genomic and proteomic techniques will guide us in the search for vulnerable patients. It will also lead to the development and deployment of new therapies and ultimately to a reduction in the incidence of acute coronary syndromes and sudden cardiac death. We encourage healthcare policy makers to promote translational research for screening and treatment of vulnerable patients.

2,719 citations


Journal ArticleDOI
TL;DR: A template-less and surfactant-free aqueous method is proposed that enables the generation, at large scale, low cost, and moderate temperatures, of advanced metal oxide particulate thin films with controlled complexity.
Abstract: A novel approach to the rational fabrication of smart and functional metal oxide particulate thin films and coatings is demonstrated on the growth of ZnO nanowires and oriented nanorod arrays. The synthesis involves a template-less and surfactant-free aqueous method, which enables the generation, at large scale, low cost, and moderate temperatures, of advanced metal oxide thin films with controlled complexity. The strategy consists of monitoring the nucleation, growth, and aging processes by means of chemical and electrostatic control of the interfacial free energy. It enables control of the size of nano-, meso-, and microcrystallites, their surface morphology, their orientations onto various substrates, and their crystal structure.

2,619 citations


Book ChapterDOI
01 Jan 2003
TL;DR: Although the life histories and future trajectories of individuals and groups were largely neglected by early sociological research, the life course perspective is today perhaps the pre-eminent theoretical orientation in the study of lives.
Abstract: Today, the life course perspective is perhaps the pre-eminent theoretical orientation in the study of lives, but this has not always been the case. The life histories and future trajectories of individuals and groups were largely neglected by early sociological research. In the pioneering study, The Polish Peasant in Europe and America (1918-1920), W. I. Thomas (with Florian Znaniecki) first made use of such histories and trajectories and argued strongly that they be investigated more fully by sociologists. By the mid-1920s, Thomas was emphasizing the vital need for a “longitudinal approach to life history” using life record data (Volkart, 1951, p. 593). He advocated that studies investigate “many types of individuals with regard to their experiences and various past periods of life in different situations” and follow “groups of individuals into the future, getting a continuous record of experiences as they occur.” Though this advice went unheeded for decades, Thomas’s early recommendations anticipated study of the life course and longitudinal research that has become such a central part of modern sociology and other disciplines.

2,401 citations


Journal ArticleDOI
TL;DR: Findings are summarized that point to the psychological value of studying particles, the parts of speech that include pronouns, articles, prepositions, conjunctives, and auxiliary verbs.
Abstract: The words people use in their daily lives can reveal important aspects of their social and psychological worlds. With advances in computer technology, text analysis allows researchers to reliably and quickly assess features of what people say as well as subtleties in their linguistic styles. Following a brief review of several text analysis programs, we summarize some of the evidence that links natural word use to personality, social and situational fluctuations, and psychological interventions. Of particular interest are findings that point to the psychological value of studying particles—parts of speech that include pronouns, articles, prepositions, conjunctives, and auxiliary verbs. Particles, which serve as the glue that holds nouns and regular verbs together, can serve as markers of emotional state, social identity, and cognitive styles.
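
As a toy illustration of the kind of particle counting described above (not the LIWC dictionaries or any program reviewed in the article; the category word lists below are small hand-picked examples of my own), per-category usage rates could be computed like this:

```python
# Illustrative only: rate of a few "particle" categories per word of text.
# The vocabularies are tiny placeholders, not validated dictionaries.
import re

PARTICLES = {
    "pronouns": {"i", "you", "he", "she", "it", "we", "they", "me", "my", "your"},
    "articles": {"a", "an", "the"},
    "prepositions": {"in", "on", "at", "of", "for", "with", "to", "from"},
    "conjunctions": {"and", "but", "or", "because", "although"},
    "auxiliary_verbs": {"am", "is", "are", "was", "were", "have", "has", "had", "will", "would"},
}

def particle_rates(text):
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in PARTICLES.items()}

print(particle_rates("I was at the store because we had to buy milk."))
```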

Journal ArticleDOI
TL;DR: The authors develop a switching cost typology that identifies three types of switching costs: (1) procedural switching costs, primarily involving the loss of time and effort; (2) financial switching costs, involving the loss of financially quantifiable resources; and (3) relational switching costs, involving psychological or emotional discomfort due to the loss of identity and the breaking of bonds. They conclude that consumers' perceptions of product complexity and provider heterogeneity, their breadth of product use, and their alternative provider and switching experience drive the switching costs they perceive.
Abstract: The management of customer switching costs has been hampered by the lack of a comprehensive typology for conceptualizing, categorizing, and measuring consumers' perceptions of these costs. This research develops a switching cost typology that identifies three types of switching costs: (1) procedural switching costs, primarily involving the loss of time and effort; (2) financial switching costs, involving the loss of financially quantifiable resources; and (3) relational switching costs, involving psychological or emotional discomfort due to the loss of identity and the breaking of bonds. The research then examines the antecedents and consequences of switching costs. The results suggest that consumers' perceptions of product complexity and provider heterogeneity, their breadth of product use, and their alternative provider and switching experience drive the switching costs they perceive. Furthermore, all three switching cost types significantly influence consumers' intentions to stay with their current service provider, explaining more variance than does satisfaction.

Journal ArticleDOI
TL;DR: A new set of high-redshift supernovae confirms previous supernova evidence for an accelerating universe; when the supernova results are combined with independent flat-universe measurements of the mass density from CMB and galaxy redshift distortion data, they provide a measurement of $w=-1.05^{+0.15}_{-0.20}$ (statistical) $\pm0.09$ (identified systematic), if w is assumed to be constant in time.
Abstract: We report measurements of $\Omega_M$, $\Omega_\Lambda$, and w from eleven supernovae at z=0.36-0.86 with high-quality lightcurves measured using WFPC-2 on the HST. This is an independent set of high-redshift supernovae that confirms previous supernova evidence for an accelerating Universe. Combined with earlier Supernova Cosmology Project data, the new supernovae yield a flat-universe measurement of the mass density $\Omega_M=0.25^{+0.07}_{-0.06}$ (statistical) $\pm0.04$ (identified systematics), or equivalently, a cosmological constant of $\Omega_\Lambda=0.75^{+0.06}_{-0.07}$ (statistical) $\pm0.04$ (identified systematics). When the supernova results are combined with independent flat-universe measurements of $\Omega_M$ from CMB and galaxy redshift distortion data, they provide a measurement of $w=-1.05^{+0.15}_{-0.20}$ (statistical) $\pm0.09$ (identified systematic), if w is assumed to be constant in time. The new data offer greatly improved color measurements of the high-redshift supernovae, and hence improved host-galaxy extinction estimates. These extinction measurements show no anomalous negative E(B-V) at high redshift. The precision of the measurements is such that it is possible to perform a host-galaxy extinction correction directly for individual supernovae without any assumptions or priors on the parent E(B-V) distribution. Our cosmological fits using full extinction corrections confirm that dark energy is required with $P(\Omega_\Lambda>0)>0.99$, a result consistent with previous and current supernova analyses which rely upon the identification of a low-extinction subset or prior assumptions concerning the intrinsic extinction distribution.
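
A small bookkeeping note on the quoted numbers (a consistency check, not a new result): under the flat-universe assumption used in the fits, the two density parameters are tied together, which is why the statistical uncertainties on $\Omega_M$ and $\Omega_\Lambda$ mirror each other.

```latex
% Flatness constraint relating the two quoted density parameters:
\[
  \Omega_M + \Omega_\Lambda = 1
  \quad\Longrightarrow\quad
  \Omega_\Lambda = 1 - \Omega_M = 1 - 0.25^{+0.07}_{-0.06} = 0.75^{+0.06}_{-0.07}.
\]
```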

Journal ArticleDOI
24 Apr 2003-Nature
TL;DR: A high-quality draft sequence of the N. crassa genome is reported, suggesting that RIP has had a profound impact on genome evolution, greatly slowing the creation of new genes through genomic duplication and resulting in a genome with an unusually low proportion of closely related genes.
Abstract: Neurospora crassa is a central organism in the history of twentieth-century genetics, biochemistry and molecular biology. Here, we report a high-quality draft sequence of the N. crassa genome. The approximately 40-megabase genome encodes about 10,000 protein-coding genes—more than twice as many as in the fission yeast Schizosaccharomyces pombe and only about 25% fewer than in the fruitfly Drosophila melanogaster. Analysis of the gene set yields insights into unexpected aspects of Neurospora biology including the identification of genes potentially associated with red light photobiology, genes implicated in secondary metabolism, and important differences in Ca2+ signalling as compared with plants and animals. Neurospora possesses the widest array of genome defence mechanisms known for any eukaryotic organism, including a process unique to fungi called repeat-induced point mutation (RIP). Genome analysis suggests that RIP has had a profound impact on genome evolution, greatly slowing the creation of new genes through genomic duplication and resulting in a genome with an unusually low proportion of closely related genes.

Journal ArticleDOI
TL;DR: A quantized maximum signal-to-noise ratio (SNR) beamforming technique is proposed where the receiver only sends the label of the best beamforming vector in a predetermined codebook to the transmitter.
Abstract: Transmit beamforming and receive combining are simple methods for exploiting the significant diversity that is available in multiple-input multiple-output (MIMO) wireless systems. Unfortunately, optimal performance requires either complete channel knowledge or knowledge of the optimal beamforming vector; both are hard to realize. In this article, a quantized maximum signal-to-noise ratio (SNR) beamforming technique is proposed where the receiver only sends the label of the best beamforming vector in a predetermined codebook to the transmitter. By using the distribution of the optimal beamforming vector in independent and identically distributed Rayleigh fading matrix channels, the codebook design problem is solved and related to the problem of Grassmannian line packing. The proposed design criterion is flexible enough to allow for side constraints on the codebook vectors. Bounds on the codebook size are derived to guarantee full diversity order. Results on the density of Grassmannian line packings are derived and used to develop bounds on the codebook size given a capacity or SNR loss. Monte Carlo simulations are presented that compare the probability of error for different quantization strategies.
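
A minimal numerical sketch of the selection rule implied above, assuming maximum-ratio combining at the receiver so that the receive SNR is proportional to the squared norm of the effective channel H f. The random unit-norm codebook and the function names are placeholders of my own; the paper designs the codebook via Grassmannian line packing rather than at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_codebook(n_tx, size):
    """Placeholder codebook of unit-norm vectors (the paper uses Grassmannian packings)."""
    F = rng.standard_normal((size, n_tx)) + 1j * rng.standard_normal((size, n_tx))
    return F / np.linalg.norm(F, axis=1, keepdims=True)

def best_codeword_index(H, codebook):
    """Index the receiver feeds back: the codeword maximizing ||H f||^2."""
    gains = np.linalg.norm(H @ codebook.T, axis=0)
    return int(np.argmax(gains))

# Example: 4 transmit / 2 receive antennas, 64-entry (6-bit feedback) codebook.
n_rx, n_tx = 2, 4
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
F = random_unit_codebook(n_tx, size=64)
idx = best_codeword_index(H, F)
print("feedback index:", idx, "| beamforming gain ||Hf||^2:", np.linalg.norm(H @ F[idx]) ** 2)
```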

Journal ArticleDOI
TL;DR: This article finds that institutional ownership concentration is positively related to the pay-for-performance sensitivity of executive compensation and negatively related to the level of compensation, even after controlling for firm size, industry, investment opportunities, and performance.
Abstract: We find that institutional ownership concentration is positively related to the pay-for-performance sensitivity of executive compensation and negatively related to the level of compensation, even after controlling for firm size, industry, investment opportunities, and performance. These results suggest that the institutions serve a monitoring role in mitigating the agency problem between shareholders and managers. Additionally, we find that clientele effects exist among institutions for firms with certain compensation structures, suggesting that institutions also influence compensation structures through their preferences

Journal ArticleDOI
TL;DR: It is demonstrated that the reconstruction-based framework provides a convenient way for fault analysis, including fault detectability, reconstructability and identifiability conditions, resolving many theoretical issues in process monitoring.
Abstract: This paper provides an overview and analysis of statistical process monitoring methods for fault detection, identification and reconstruction. Several fault detection indices in the literature are analyzed and unified. Fault reconstruction for both sensor and process faults is presented which extends the traditional missing value replacement method. Fault diagnosis methods that have appeared recently are reviewed. The reconstruction-based approach and the contribution-based approach are analyzed and compared with simulation and industrial examples. The complementary nature of the reconstruction- and contribution-based approaches is highlighted. An industrial example of polyester film process monitoring is given to demonstrate the power of the contribution- and reconstruction-based approaches in a hierarchical monitoring framework. Finally we demonstrate that the reconstruction-based framework provides a convenient way for fault analysis, including fault detectability, reconstructability and identifiability conditions, resolving many theoretical issues in process monitoring. Additional topics are summarized at the end of the paper for future investigation. Copyright © 2003 John Wiley & Sons, Ltd.
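
As a hedged illustration of one widely used detection index in this family, the sketch below computes the squared prediction error (SPE, or Q statistic) of a sample against a PCA model of normal operation. It is generic rather than the paper's unified treatment; control limits and the reconstruction- and contribution-based diagnosis steps are omitted, and all names are my own.

```python
# Generic PCA-based SPE fault detection sketch (not the paper's implementation).
import numpy as np

def fit_pca_model(X, n_components):
    """X: (samples x variables) of normal operating data."""
    mean = X.mean(axis=0)
    std = X.std(axis=0, ddof=1)
    Xs = (X - mean) / std
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:n_components].T              # retained loading vectors
    return mean, std, P

def spe_index(x, mean, std, P):
    """SPE = squared norm of the residual after projecting onto the PCA subspace."""
    xs = (x - mean) / std
    residual = xs - P @ (P.T @ xs)
    return float(residual @ residual)

# Example with synthetic correlated data: a sensor bias on variable 0 should inflate the SPE.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 5))
mean, std, P = fit_pca_model(X, n_components=2)
normal_x, faulty_x = X[0], X[0].copy()
faulty_x[0] += 5.0
print(spe_index(normal_x, mean, std, P), spe_index(faulty_x, mean, std, P))
```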

Journal ArticleDOI
TL;DR: In this paper, a double-blind, placebo-controlled trial involving pregnant women with a documented history of spontaneous preterm delivery was conducted, where women were enrolled at 19 clinical centers at 16 to 20 weeks of gestation.
Abstract: Background Women who have had a spontaneous preterm delivery are at greatly increased risk for preterm delivery in subsequent pregnancies. The results of several small trials have suggested that 17 alpha-hydroxyprogesterone caproate (17P) may reduce the risk of preterm delivery. Methods We conducted a double-blind, placebo-controlled trial involving pregnant women with a documented history of spontaneous preterm delivery. Women were enrolled at 19 clinical centers at 16 to 20 weeks of gestation and randomly assigned by a central data center, in a 2:1 ratio, to receive either weekly injections of 250 mg of 17P or weekly injections of an inert oil placebo; injections were continued until delivery or to 36 weeks of gestation. The primary outcome was preterm delivery before 37 weeks of gestation. Analysis was performed according to the intention-to-treat principle. Results Base-line characteristics of the 310 women in the progesterone group and the 153 women in the placebo group were similar. Treatment with 1...

Journal ArticleDOI
TL;DR: The nervous system physiology, the factors that are critical for nerve repair, and the current approaches that are being explored to aid peripheral nerve regeneration and spinal cord repair are reviewed.
Abstract: Nerve regeneration is a complex biological phenomenon. In the peripheral nervous system, nerves can regenerate on their own if injuries are small. Larger injuries must be surgically treated, typically with nerve grafts harvested from elsewhere in the body. Spinal cord injury is more complicated, as there are factors in the body that inhibit repair. Unfortunately, a solution to completely repair spinal cord injury has not been found. Thus, bioengineering strategies for the peripheral nervous system are focused on alternatives to the nerve graft, whereas efforts for spinal cord injury are focused on creating a permissive environment for regeneration. Fortunately, recent advances in neuroscience, cell culture, genetic techniques, and biomaterials provide optimism for new treatments for nerve injuries. This article reviews the nervous system physiology, the factors that are critical for nerve repair, and the current approaches that are being explored to aid peripheral nerve regeneration and spinal cord repair.

Journal ArticleDOI
TL;DR: The current project investigated the features of linguistic style that distinguish between true and false stories, and found that liars showed lower cognitive complexity, used fewer self-references and other-references, and used more negative emotion words than truth-tellers.
Abstract: Telling lies often requires creating a story about an experience or attitude that does not exist. As a result, false stories may be qualitatively different from true stories. The current project investigated the features of linguistic style that distinguish between true and false stories. In an analysis of five independent samples, a computer-based text analysis program correctly classified liars and truth-tellers at a rate of 67% when the topic was constant and a rate of 61% overall. Compared to truth-tellers, liars showed lower cognitive complexity, used fewer self-references and other-references, and used more negative emotion words.

Proceedings Article
01 Dec 2003
TL;DR: This paper proposes a multi-scale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions, and develops an image synthesis method to calibrate the parameters that define the relative importance of different scales.
Abstract: The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multi-scale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method.
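
To make the multi-scale structure concrete, here is a deliberately simplified sketch: a similarity score is computed at several dyadic scales, downsampling by two between scales, and the per-scale scores are combined with exponent weights. The per-scale score below uses global image statistics instead of the local sliding window of SSIM proper, and the five weights are the values often quoted for a five-scale setting, so treat this as an illustration of the idea rather than a reference MS-SSIM implementation.

```python
# Simplified multi-scale structural-similarity sketch (illustrative only).
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-scale score from global statistics (real SSIM uses local windows)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def downsample2(img):
    """Average 2x2 blocks, dropping any odd trailing row/column."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def ms_ssim_sketch(x, y, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    score = 1.0
    for w in weights:
        score *= ssim_global(x, y) ** w
        x, y = downsample2(x), downsample2(y)
    return score

rng = np.random.default_rng(2)
ref = rng.uniform(0, 255, (256, 256))
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
print(ms_ssim_sketch(ref, ref), ms_ssim_sketch(ref, noisy))
```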

Proceedings ArticleDOI
24 Aug 2003
TL;DR: This work presents an innovative co-clustering algorithm that monotonically increases the preserved mutual information by intertwining both the row and column clusterings at all stages and demonstrates that the algorithm works well in practice, especially in the presence of sparsity and high-dimensionality.
Abstract: Two-dimensional contingency or co-occurrence tables arise frequently in important applications such as text, web-log and market-basket data analysis. A basic problem in contingency table analysis is co-clustering: simultaneous clustering of the rows and columns. A novel theoretical formulation views the contingency table as an empirical joint probability distribution of two discrete random variables and poses the co-clustering problem as an optimization problem in information theory---the optimal co-clustering maximizes the mutual information between the clustered random variables subject to constraints on the number of row and column clusters. We present an innovative co-clustering algorithm that monotonically increases the preserved mutual information by intertwining both the row and column clusterings at all stages. Using the practical example of simultaneous word-document clustering, we demonstrate that our algorithm works well in practice, especially in the presence of sparsity and high-dimensionality.
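
The objective being maximized can be computed directly from a contingency table: normalize it to an empirical joint distribution, aggregate over the row and column clusters, and measure the mutual information that the clustering preserves. The sketch below computes only this objective, under assumptions of my own (hard cluster labels, natural-log units); the paper's alternating row/column update algorithm is not reproduced here.

```python
# Mutual information preserved by a given row/column co-clustering (objective only).
import numpy as np

def clustered_mutual_information(table, row_labels, col_labels):
    """table: nonnegative co-occurrence counts; labels assign rows/columns to clusters."""
    p = table / table.sum()                              # empirical joint p(x, y)
    n_rc, n_cc = row_labels.max() + 1, col_labels.max() + 1
    # Aggregate the joint distribution over the row and column clusters.
    q = np.zeros((n_rc, n_cc))
    for i, r in enumerate(row_labels):
        for j, c in enumerate(col_labels):
            q[r, c] += p[i, j]
    pr, pc = q.sum(axis=1, keepdims=True), q.sum(axis=0, keepdims=True)
    mask = q > 0
    return float((q[mask] * np.log(q[mask] / (pr @ pc)[mask])).sum())

# Toy word-document table with two obvious row blocks and two column blocks.
T = np.array([[10, 9, 0, 1],
              [8, 10, 1, 0],
              [0, 1, 9, 10],
              [1, 0, 10, 9]], dtype=float)
good = clustered_mutual_information(T, np.array([0, 0, 1, 1]), np.array([0, 0, 1, 1]))
bad = clustered_mutual_information(T, np.array([0, 1, 0, 1]), np.array([0, 1, 0, 1]))
print(good, bad)   # the aligned co-clustering preserves more mutual information
```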

Journal ArticleDOI
01 Jul 2003-Immunity
TL;DR: A TNF/iNOS-producing (Tip)-DC subset in spleens of Listeria monocytogenes-infected mice that is absent from CCR2-deficient mice is identified, indicating that Tip-DCs are not essential for T cell priming.

Journal ArticleDOI
TL;DR: The Common Land Model (CLM) was developed for community use by a grassroots collaboration of scientists who have an interest in making a general land model available for public use and further development.
Abstract: The Common Land Model (CLM) was developed for community use by a grassroots collaboration of scientists who have an interest in making a general land model available for public use and further development. The major model characteristics include enough unevenly spaced layers to adequately represent soil temperature and soil moisture, and a multilayer parameterization of snow processes; an explicit treatment of the mass of liquid water and ice water and their phase change within the snow and soil system; a runoff parameterization following the TOPMODEL concept; a canopy photo synthesis-conductance model that describes the simultaneous transfer of CO2 and water vapor into and out of vegetation; and a tiled treatment of the subgrid fraction of energy and water balance. CLM has been extensively evaluated in offline mode and coupling runs with the NCAR Community Climate Model (CCM3). The results of two offline runs, presented as examples, are compared with observations and with the simulation of three other la...

Journal ArticleDOI
TL;DR: The application of Grassmannian frames to wireless communication and to multiple description coding is discussed, along with their connection to unit norm tight frames in the case of frames generated by group-like unitary systems.

Proceedings ArticleDOI
24 Aug 2003
TL;DR: This paper proposes to employ learnable text distance functions for each database field, and shows that such measures are capable of adapting to the specific notion of similarity that is appropriate for the field's domain.
Abstract: The problem of identifying approximately duplicate records in databases is an essential step for data cleaning and data integration processes. Most existing approaches have relied on generic or manually tuned distance metrics for estimating the similarity of potential duplicates. In this paper, we present a framework for improving duplicate detection using trainable measures of textual similarity. We propose to employ learnable text distance functions for each database field, and show that such measures are capable of adapting to the specific notion of similarity that is appropriate for the field's domain. We present two learnable text similarity measures suitable for this task: an extended variant of learnable string edit distance, and a novel vector-space based measure that employs a Support Vector Machine (SVM) for training. Experimental results on a range of datasets show that our framework can improve duplicate detection accuracy over traditional techniques.
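
A hedged sketch of the overall pipeline implied above: compute one similarity feature per database field for a pair of records, then train a classifier to label pairs as duplicates or non-duplicates. The generic difflib ratio below is only a stand-in for the paper's learnable string edit distance and SVM-based vector-space measure, and the field names and toy records are invented for illustration.

```python
# Pair-of-records duplicate detection sketch with per-field similarity features.
from difflib import SequenceMatcher
from sklearn.svm import SVC

FIELDS = ["name", "address", "city"]   # hypothetical database fields

def field_similarities(rec_a, rec_b):
    """One generic string-similarity score per field (stand-in for learned measures)."""
    return [SequenceMatcher(None, rec_a[f].lower(), rec_b[f].lower()).ratio()
            for f in FIELDS]

pairs = [
    ({"name": "Jon Smith", "address": "12 Oak St.", "city": "Austin"},
     {"name": "John Smith", "address": "12 Oak Street", "city": "Austin"}, 1),
    ({"name": "Jane Doe", "address": "5 Elm Ave", "city": "Dallas"},
     {"name": "John Smith", "address": "12 Oak Street", "city": "Austin"}, 0),
    ({"name": "J. Smith", "address": "12 Oak St", "city": "Austin"},
     {"name": "John Smith", "address": "12 Oak St.", "city": "Austin"}, 1),
    ({"name": "Mary Major", "address": "9 Pine Rd", "city": "Houston"},
     {"name": "Jane Doe", "address": "5 Elm Ave", "city": "Dallas"}, 0),
]
X = [field_similarities(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = SVC(kernel="linear").fit(X, y)
query = field_similarities({"name": "Jon Smyth", "address": "12 Oak St", "city": "Austin"},
                           {"name": "John Smith", "address": "12 Oak Street", "city": "Austin"})
print(clf.predict([query]))   # likely [1], i.e. flagged as duplicates
```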

Journal ArticleDOI
TL;DR: Results from the present study demonstrate that the scale is internally consistent, related to alternative measures and hypothesized causes and effects, and unrelated to theoretically distinct constructs, providing evidence of convergent, criterion-related, and discriminant validity.
Abstract: This study describes the development and validity testing of a field measure of transactive memory systems. Transactive memory systems are especially important for teams designed to leverage members' expertise, but field research has lagged because there are no adequate measures of the construct. The author developed and tested a 15-item scale in a laboratory sample of 124 teams, a field sample of 64 Master of Business Administration consulting teams, and a field sample of 27 teams from technology companies. Results from the present study demonstrate that the scale is internally consistent, related to alternative measures and hypothesized causes and effects, and unrelated to theoretically distinct constructs, providing evidence of convergent, criterion-related, and discriminant validity. Suggestions for improving the scale, future validity testing, and possible boundary conditions are discussed.

Journal ArticleDOI
TL;DR: The authors investigate whether institutional investors vote with their feet when dissatisfied with a firm's management by examining changes in equity ownership around forced CEO turnover, and find that aggregate institutional ownership and the number of institutional investors decline in the year prior to a forced CEO turnover.

Journal ArticleDOI
TL;DR: Though it is clear that reducing exposure to media violence will reduce aggression and violence, it is less clear what sorts of interventions will produce a reduction in exposure, and large-scale longitudinal studies would help specify the magnitude of media-violence effects on the most severe types of violence.
Abstract: Research on violent television and films, video games, and music reveals unequivocal evidence that media violence increases the likelihood of aggressive and violent behavior in both immediate and long-term contexts. The effects appear larger for milder than for more severe forms of aggression, but the effects on severe forms of violence are also substantial (r = .13 to .32) when compared with effects of other violence risk factors or medical effects deemed important by the medical community (e.g., effect of aspirin on heart attacks). The research base is large; diverse in methods, samples, and media genres; and consistent in overall findings. The evidence is clearest within the most extensively researched domain, television and film violence. The growing body of video-game research yields essentially the same conclusions. Short-term exposure increases the likelihood of physically and verbally aggressive behavior, aggressive thoughts, and aggressive emotions. Recent large-scale longitudinal studies provide converging evidence linking frequent exposure to violent media in childhood with aggression later in life, including physical assaults and spouse abuse. Because extremely violent criminal behaviors (e.g., forcible rape, aggravated assault, homicide) are rare, new longitudinal studies with larger samples are needed to estimate accurately how much habitual childhood exposure to media violence increases the risk for extreme violence. Well-supported theory delineates why and when exposure to media violence increases aggression and violence. Media violence produces short-term increases by priming existing aggressive scripts and cognitions, increasing physiological arousal, and triggering an automatic tendency to imitate observed behaviors. Media violence produces long-term effects via several types of learning processes leading to the acquisition of lasting (and automatically accessible) aggressive scripts, interpretational schemas, and aggression-supporting beliefs about social behavior, and by reducing individuals' normal negative emotional responses to violence (i.e., desensitization). Certain characteristics of viewers (e.g., identification with aggressive characters), social environments (e.g., parental influences), and media content (e.g., attractiveness of the perpetrator) can influence the degree to which media violence affects aggression, but there are some inconsistencies in research results. This research also suggests some avenues for preventive intervention (e.g., parental supervision, interpretation, and control of children's media use). However, extant research on moderators suggests that no one is wholly immune to the effects of media violence. Recent surveys reveal an extensive presence of violence in modern media. Furthermore, many children and youth spend an inordinate amount of time consuming violent media. Although it is clear that reducing exposure to media violence will reduce aggression and violence, it is less clear what sorts of interventions will produce a reduction in exposure. The sparse research literature suggests that counterattitudinal and parental-mediation interventions are likely to yield beneficial effects, but that media literacy interventions by themselves are unsuccessful. Though the scientific debate over whether media violence increases aggression and violence is essentially over, several critical tasks remain. 
Additional laboratory and field studies are needed for a better understanding of underlying psychological processes, which eventually should lead to more effective interventions. Large-scale longitudinal studies would help specify the magnitude of media-violence effects on the most severe types of violence. Meeting the larger societal challenge of providing children and youth with a much healthier media diet may prove to be more difficult and costly, especially if the scientific, news, public policy, and entertainment communities fail to educate the general public about the real risks of media-violence exposure to children and youth.

Journal ArticleDOI
01 Aug 2003-Polymer
TL;DR: In this article, the effect of incomplete exfoliation of simple stacks of layered aluminosilicates on nanocomposite modulus was examined using the composite theories of Halpin-Tsai and Mori-Tanaka.