
Showing papers by "University of Houston" published in 2009


Journal ArticleDOI
TL;DR: A general method for the arylation of directing-group-containing arenes by aryl iodides is developed; with palladium acetate as the catalyst, anilides, benzamides, benzoic acids, benzylamines, and 2-substituted pyridine derivatives are arylated under nearly identical conditions.
Abstract: The transition-metal-catalyzed functionalization of C-H bonds is a powerful method for generating carbon-carbon bonds. Although significant advances to this field have been reported during the past decade, many challenges remain. First, most of the methods are substrate-specific and thus cannot be generalized. Second, conversions of unactivated (i.e., not benzylic or alpha to heteroatom) sp(3) C-H bonds to C-C bonds are rare, with most examples limited to t-butyl groups, a conversion that is inherently simple because there are no beta-hydrogens that can be eliminated. Finally, the palladium, rhodium, and ruthenium catalysts routinely used for the conversion of C-H bonds to C-C bonds are expensive. Catalytically active metals that are cheaper and less exotic (e.g., copper, iron, and manganese) are rarely used. This Account describes our attempts to provide solutions to these three problems. We have developed a general method for directing-group-containing arene arylation by aryl iodides. Using palladium acetate as the catalyst, we arylated anilides, benzamides, benzoic acids, benzylamines, and 2-substituted pyridine derivatives under nearly identical conditions. We have also developed a method for the palladium-catalyzed auxiliary-assisted arylation of unactivated sp(3) C-H bonds. This procedure allows for the beta-arylation of carboxylic acid derivatives and the gamma-arylation of amine derivatives. Furthermore, copper catalysis can be used to mediate the arylation of acidic arene C-H bonds (i.e., those with pK(a) values <35 in DMSO). Using a copper iodide catalyst in combination with a base and a phenanthroline ligand, we successfully arylated electron-rich and electron-deficient heterocycles and electron-poor arenes possessing at least two electron-withdrawing groups. The reaction exhibits unusual regioselectivity: arylation occurs at the most hindered position. This copper-catalyzed method supplements the well-known C-H activation/borylation methodology, in which functionalization usually occurs at the least hindered position. We also describe preliminary investigations to determine the mechanisms of these transformations. We anticipate that other transition metals, including iron, nickel, cobalt, and silver, will also be able to facilitate deprotonation/arylation reaction sequences.

1,747 citations


Journal ArticleDOI
TL;DR: The miRecords database, described in this paper, contains 1135 records of validated miRNA-target interactions between 301 miRNAs and 902 target genes in seven animal species.
Abstract: MicroRNAs (miRNAs) are an important class of small noncoding RNAs capable of regulating other genes’ expression. Much progress has been made in computational target prediction of miRNAs in recent years. More than 10 miRNA target prediction programs have been established, yet the prediction of animal miRNA targets remains a challenging task. We have developed miRecords, an integrated resource for animal miRNA–target interactions. The Validated Targets component of this resource hosts a large, high-quality manually curated database of experimentally validated miRNA–target interactions with systematic documentation of experimental support for each interaction. The current release of this database includes 1135 records of validated miRNA–target interactions between 301 miRNAs and 902 target genes in seven animal species. The Predicted Targets component of miRecords stores predicted miRNA targets produced by 11 established miRNA target prediction programs. miRecords is expected to serve as a useful resource not only for experimental miRNA researchers, but also for informatics scientists developing the next-generation miRNA target prediction programs. The miRecords is available at http://miRecords.umn.edu/miRecords.

1,369 citations


Journal ArticleDOI
TL;DR: This review will describe the molecular mechanisms for permeation of antibiotics through the outer membrane, and the strategies that bacteria have deployed to resist antibiotics by modifications of these pathways.

1,297 citations


Journal ArticleDOI
TL;DR: In this paper, a model-independent framework of genetic units and bounding surfaces for sequence stratigraphy is proposed, in which the genetic units reflect the interplay of accommodation and sedimentation (i.e., forced regressive, lowstand normal regressive, and highstand normal regressive deposits) and are bounded by sequence stratigraphic surfaces.

1,255 citations


Journal ArticleDOI
K. Aamodt, N. Abel, A. Abrahantes Quintana, A. Acero, +989 more authors (76 institutions)
TL;DR: In this paper, the production of mesons containing strange quarks (K0S, φ) and of both singly and doubly strange baryons (Λ, Λ̄, and Ξ− + Ξ̄+) is measured at mid-rapidity in pp collisions at √s = 0.9 TeV with the ALICE experiment at the LHC.

1,176 citations


Journal ArticleDOI
24 Apr 2009-Science
TL;DR: To understand the biology and evolution of ruminants, the cattle genome was sequenced to about sevenfold coverage and provides a resource for understanding mammalian evolution and accelerating livestock genetic improvement for milk and meat production.
Abstract: To understand the biology and evolution of ruminants, the cattle genome was sequenced to about sevenfold coverage. The cattle genome contains a minimum of 22,000 genes, with a core set of 14,345 orthologs shared among seven mammalian species of which 1217 are absent or undetected in noneutherian (marsupial or monotreme) genomes. Cattle-specific evolutionary breakpoint regions in chromosomes have a higher density of segmental duplications, enrichment of repetitive elements, and species-specific variations in genes associated with lactation and immune responsiveness. Genes involved in metabolism are generally highly conserved, although five metabolic genes are deleted or extensively diverged from their human orthologs. The cattle genome sequence thus provides a resource for understanding mammalian evolution and accelerating livestock genetic improvement for milk and meat production.

1,144 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive overview of coalitional game theory and its usage in wireless and communication networks, with an in-depth analysis of the methodologies and approaches for applying these games in both game-theoretic and communication applications.
Abstract: In this tutorial, we provided a comprehensive overview of coalitional game theory, and its usage in wireless and communication networks. For this purpose, we introduced a novel classification of coalitional games by grouping the sparse literature into three distinct classes of games: canonical coalitional games, coalition formation games, and coalitional graph games. For each class, we explained in detail the fundamental properties, discussed the main solution concepts, and provided an in-depth analysis of the methodologies and approaches for using these games in both game theory and communication applications. The presented applications have been carefully selected from a broad range of areas spanning a diverse number of research problems. The tutorial also sheds light on future opportunities for using the strong analytical tool of coalitional games in a number of applications. In a nutshell, this article fills a void in existing communications literature, by providing a novel tutorial on applying coalitional game theory in communication networks through comprehensive theory and technical details as well as through practical examples drawn from both game theory and communication applications.

892 citations
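To make the canonical-class solution concepts mentioned above concrete, here is a minimal Python sketch (not taken from the tutorial itself) that computes the Shapley value of a small transferable-utility game by averaging each player's marginal contribution over all join orders; the three-player characteristic function is invented purely for illustration.

```python
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += v(with_p) - v(coalition)
            coalition = with_p
    return {p: phi[p] / len(orders) for p in players}

# Hypothetical 3-player game: coalition value grows superadditively with cooperation.
def v(S):
    table = {
        frozenset(): 0.0,
        frozenset({1}): 1.0, frozenset({2}): 1.0, frozenset({3}): 2.0,
        frozenset({1, 2}): 3.0, frozenset({1, 3}): 4.0, frozenset({2, 3}): 4.0,
        frozenset({1, 2, 3}): 6.0,
    }
    return table[frozenset(S)]

print(shapley_values([1, 2, 3], v))  # fair division of the grand-coalition value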


Journal ArticleDOI
TL;DR: Catuneanu et al., as discussed in this paper, used a neutral approach focused on model-independent, fundamental concepts, because these are the ones common to the various approaches; this search for common ground is what they meant by "standardization", not the imposition of a strict, inflexible set of rules for the placement of sequence-stratigraphic surfaces.

872 citations


Journal ArticleDOI
TL;DR: It is proposed that it is misguided and generally unjustified to attempt to control for IQ differences by matching procedures or, more commonly, by using IQ scores as covariates.
Abstract: IQ scores are volatile indices of global functional outcome, the final common path of an individual's genes, biology, cognition, education, and experiences. In studying neurocognitive outcomes in children with neurodevelopmental disorders, it is commonly assumed that IQ can and should be partialed out of statistical relations or used as a covariate for specific measures of cognitive outcome. We propose that it is misguided and generally unjustified to attempt to control for IQ differences by matching procedures or, more commonly, by using IQ scores as covariates. We offer logical, statistical, and methodological arguments, with examples from three neurodevelopmental disorders (spina bifida meningomyelocele, learning disabilities, and attention deficit hyperactivity disorder), that: (1) a historical reification of general intelligence, g, as a causal construct that measures aptitude and potential rather than achievement and performance has fostered the idea that IQ has special status in studies of neurocognitive function in neurodevelopmental disorders; (2) IQ does not meet the requirements for a covariate; and (3) using IQ as a matching variable or covariate has produced overcorrected, anomalous, and counterintuitive findings about neurocognitive function.

809 citations
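The statistical argument against using IQ as a covariate can be illustrated with a small simulation. The sketch below is not an analysis from the paper; it assumes a hypothetical data-generating process in which a disorder lowers a latent ability that drives both IQ and the specific cognitive outcome, so that partialing out IQ absorbs part of the true group effect and yields an attenuated ("overcorrected") estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical process: a disorder (group = 1) lowers a latent ability that feeds
# both IQ and a specific cognitive outcome.
group = rng.integers(0, 2, n)
latent = rng.normal(0, 1, n) - 0.8 * group
iq = 100 + 15 * latent + rng.normal(0, 5, n)
outcome = 50 + 10 * latent - 5.0 * group + rng.normal(0, 5, n)  # direct effect = -5

# (1) Simple group comparison: recovers the total effect (direct + via latent ability).
X1 = np.column_stack([np.ones(n), group])
b1 = np.linalg.lstsq(X1, outcome, rcond=None)[0]

# (2) ANCOVA-style model with IQ as covariate: the group coefficient is pulled toward
# zero because IQ already carries part of the disorder-related difference.
X2 = np.column_stack([np.ones(n), group, iq])
b2 = np.linalg.lstsq(X2, outcome, rcond=None)[0]

print("group effect without IQ covariate:", round(b1[1], 2))
print("group effect with IQ covariate:   ", round(b2[1], 2))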


Journal ArticleDOI
TL;DR: In this article, the authors explore the role of strategic leaders in managing exploration and exploitation, reveal how the impact of leadership is contingent upon dynamic environmental conditions, and argue that environmental dynamism needs to be taken into account to fully understand the effectiveness of strategic leaders.
Abstract: This study advances prior theoretical research by linking transformational and transactional behaviors of strategic leaders to two critical outputs of organizational learning: exploratory and exploitative innovation. Findings indicate that transformational leadership behaviors contribute significantly to adopting generative thinking and pursuing exploratory innovation. Transactional leadership behaviors, on the other hand, facilitate improving and extending existing knowledge and are associated with exploitative innovation. In addition, we argue that environmental dynamism needs to be taken into account to fully understand the effectiveness of strategic leaders. Our study provides new insights that misfits rather than fits between leadership behaviors and innovative outcomes matter in dynamic environments. Hence, we contribute to the debate on the role of strategic leaders in managing exploration and exploitation, not only by examining how specific leadership behaviors impact innovative outcomes, but also by revealing how the impact of leadership is contingent upon dynamic environmental conditions.

742 citations


Journal ArticleDOI
TL;DR: In this article, the authors used field and laboratory measurements, geographic information systems, and simulation modeling to investigate the potential effects of accelerated sea-level rise on tidal marsh area and delivery of ecosystem ser- vices along the Georgia coast.
Abstract: We used field and laboratory measurements, geographic information systems, and simulation modeling to investigate the potential effects of accelerated sea-level rise on tidal marsh area and delivery of ecosystem services along the Georgia coast. Model simulations using the Intergovernmental Panel on Climate Change (IPCC) mean and maximum estimates of sea-level rise for the year 2100 suggest that salt marshes will decline in area by 20% and 45%, respectively. The area of tidal freshwater marshes will increase by 2% under the IPCC mean scenario, but will decline by 39% under the maximum scenario. Delivery of ecosystem services associated with productivity (macrophyte biomass) and waste treatment (nitrogen accumulation in soil, potential denitrification) will also decline. Our findings suggest that tidal marshes at the lower and upper salinity ranges, and their attendant delivery of ecosystem services, will be most affected by accelerated sea-level rise, unless geomorphic conditions (i.e., gradual increase in elevation) enable tidal freshwater marshes to migrate inland, or vertical accretion of salt marshes to increase, to compensate for accelerated sea-level rise.

Journal ArticleDOI
TL;DR: In this paper, the authors suggest that inequality in the distribution of landownership adversely affected the emergence of human-capital promoting institutions (e.g. public schooling), and thus the pace and the nature of the transition from an agricultural to an industrial economy, contributing to the divergence in income per capita across countries.
Abstract: This paper suggests that inequality in the distribution of landownership adversely affected the emergence of human-capital promoting institutions (e.g. public schooling), and thus the pace and the nature of the transition from an agricultural to an industrial economy, contributing to the emergence of the great divergence in income per capita across countries. The prediction of the theory regarding the adverse effect of the concentration of landownership on education expenditure is established empirically based on evidence from the beginning of the 20th century in the U.S.

Book
18 Jun 2009
TL;DR: Dynamic spectrum access and management in cognitive radio networks provides an all-inclusive introduction to this emerging technology, outlining the fundamentals of cognitive radio-based wireless communication and networking, spectrum sharing models, and the requirements for dynamic spectrum access as mentioned in this paper.
Abstract: Are you involved in designing the next generation of wireless networks? With spectrum becoming an ever scarcer resource, it is critical that new systems utilize all available frequency bands as efficiently as possible. The revolutionary technology presented in this book will be at the cutting edge of future wireless communications. Dynamic Spectrum Access and Management in Cognitive Radio Networks provides you with an all-inclusive introduction to this emerging technology, outlining the fundamentals of cognitive radio-based wireless communication and networking, spectrum sharing models, and the requirements for dynamic spectrum access. In addition to the different techniques and their applications in designing dynamic spectrum access methods, you'll also find state-of-the-art dynamic spectrum access schemes, including classifications of the different schemes and the technical details of each scheme. This is a perfect introduction for graduate students and researchers, as well as a useful self-study guide for practitioners.

Journal ArticleDOI
TL;DR: In contrast to the planned role of dynamic and operational capabilities and the ambidexterity that they jointly offer, improvisational capabilities are proposed to operate distinctly as a “third hand” that facilitates reconfiguration and change in highly turbulent environments.
Abstract: Organizations are increasingly engaged in competitive dynamics that are enabled or induced by IT. A key competitive dynamics question for many organizations is how to build a competitive advantage in turbulence with digital IT systems. While the literature has focused mostly on developing and exercising dynamic capabilities for planned reconfiguration of existing operational capabilities in fairly stable environments with patterned “waves,” this may not always be possible, or even appropriate, in highly turbulent environments with unexpected “storms.” We introduce improvisational capabilities as an alternative means for managing highly turbulent environments, defined as the ability to spontaneously reconfigure existing resources to build new operational capabilities to address urgent, unpredictable, and novel environmental situations. In contrast to the planned role of dynamic and operational capabilities and the ambidexterity that they jointly offer, improvisational capabilities are proposed to operate distinctly as a “third hand” that facilitates reconfiguration and change in highly turbulent environments. First, the paper develops the notion of improvisational capabilities and articulates the key differences between the two “reconfiguration” - improvisational and dynamic - capabilities. Second, the paper compares the relative effects of improvisational and dynamic capabilities in the context of New Product Development (NPD) in different levels of environmental turbulence. Third, the paper shows how IT leveraging capability in NPD is decomposed into its three digital IT systems: Project and Resource Management Systems (PRMS), Organizational Memory Systems (OMS), and Cooperative Work Systems (CWS) - and how each of these three IT systems enhances improvisational capabilities, an effect that is accentuated in highly turbulent environments. The results show that while dynamic capabilities are the primary predictor of competitive advantage in moderately turbulent environments, improvisational capabilities fully dominate in highly turbulent environments. Besides discriminant validity, the distinction between improvisational and dynamic capabilities is evidenced by the differential effects of IT leveraging capability on improvisational and dynamic capabilities. The results show that the more the IT leveraging capability is catered toward managing resources (through PRMS) and team collaboration (through CWS) rather than relying on past knowledge and procedures (through OMS), the more it is positively associated with improvisational capabilities, particularly in more turbulent environments. The paper draws implications for how different IT systems can influence improvisational capabilities and competitive advantage in turbulent environments, thereby enhancing our understanding of the role of IT systems on reconfiguration capabilities. The paper discusses the theoretical and practical implications of building and exercising the “third hand” of improvisational capabilities for IT-enabled competitive dynamics in turbulence.

Journal ArticleDOI
TL;DR: RTI processes potentially integrate general and special education and suggest new directions for research and public policy related to LDs, but the scaling issues in schools are significant and more research is needed on the use of RTI data for identification.
Abstract: We address the advantages and challenges of service delivery models based on student response to intervention (RTI) for preventing and remediating academic difficulties and as data sources for identification for special education services. The primary goal of RTI models is improved academic and behavioral outcomes for all students. We review evidence for the processes underlying RTI, including screening and progress monitoring assessments, evidence-based interventions, and schoolwide coordination of multitiered instruction. We also discuss the secondary goal of RTI, which is to provide data for identification of learning disabilities (LDs). Incorporating instructional response into identification represents a controversial shift away from discrepancies in cognitive skills that have traditionally been a primary basis for LD identification. RTI processes potentially integrate general and special education and suggest new directions for research and public policy related to LDs, but the scaling issues in schools are significant and more research is needed on the use of RTI data for identification.

Journal ArticleDOI
TL;DR: The empirical results indicate that the model improves on the benchmark Heston model by 24% in-sample and 23% out-of-sample, and that the better fit results from improvements in the modeling of the term structure dimension as well as the moneyness dimension.
Abstract: State-of-the-art stochastic volatility models generate a "volatility smirk" that explains why out-of-the-money index puts have high prices relative to the Black-Scholes benchmark. These models also adequately explain how the volatility smirk moves up and down in response to changes in risk. However, the data indicate that the slope and the level of the smirk fluctuate largely independently. While single-factor stochastic volatility models can capture the slope of the smile, they cannot explain such largely independent fluctuations in its level and slope over time. We propose to model these movements using a two-factor stochastic volatility model. Because the factors have distinct correlations with market returns, and because the weights of the factors vary over time, the model generates stochastic correlation between volatility and stock returns. Besides providing more flexible modeling of the time variation in the smirk, the model also provides more flexible modeling of the volatility term structure. Our empirical results indicate that the model improves on the benchmark Heston model by 24% in-sample and 23% out-of-sample. The better fit results from improvements in the modeling of the term structure dimension as well as the moneyness dimension.
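The two-factor idea can be sketched, under assumptions, as a sum of two square-root variance factors with distinct correlations to returns. The Python snippet below is a generic Euler-Maruyama simulation of such a model, not the authors' exact specification or estimation procedure; all parameter values are invented for illustration.

```python
import numpy as np

def simulate_two_factor_sv(T=1.0, steps=252, n_paths=20000, seed=0, r=0.0,
                           kappa=(5.0, 0.5), theta=(0.02, 0.03), sigma=(0.3, 0.2),
                           rho=(-0.9, -0.3), v0=(0.02, 0.03)):
    """Euler simulation of returns whose variance is the sum of two square-root
    (CIR-type) factors, each with its own correlation to returns.
    Requires rho[0]**2 + rho[1]**2 <= 1. Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    v1 = np.full(n_paths, v0[0])
    v2 = np.full(n_paths, v0[1])
    log_s = np.zeros(n_paths)
    for _ in range(steps):
        w1, w2, z = rng.standard_normal((3, n_paths))
        # Return shock built so corr(zs, w1) = rho[0] and corr(zs, w2) = rho[1].
        zs = rho[0] * w1 + rho[1] * w2 + np.sqrt(1 - rho[0]**2 - rho[1]**2) * z
        v = v1 + v2
        log_s += (r - 0.5 * v) * dt + np.sqrt(v * dt) * zs
        v1 = np.abs(v1 + kappa[0] * (theta[0] - v1) * dt + sigma[0] * np.sqrt(v1 * dt) * w1)
        v2 = np.abs(v2 + kappa[1] * (theta[1] - v2) * dt + sigma[1] * np.sqrt(v2 * dt) * w2)
    return np.exp(log_s)

terminal = simulate_two_factor_sv()
log_ret = np.log(terminal)
print("terminal log-return std: ", log_ret.std().round(4))
print("terminal log-return skew:", (((log_ret - log_ret.mean()) ** 3).mean()
                                    / log_ret.std() ** 3).round(3))
```

With the negative return correlations used here, the simulated return distribution is left-skewed, which is the feature that produces a volatility smirk in option prices.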

Journal ArticleDOI
02 Apr 2009-Nature
TL;DR: It is suggested that long-term anthropogenic CO2 storage models in similar geological systems should focus on the potential mobility of CO2 dissolved in water, following the findings that geological mineral fixation is a minor CO2 trapping mechanism in natural gas fields.
Abstract: Injecting CO2 into deep geological strata is proposed as a safe and economically favourable means of storing CO2 captured from industrial point sources. It is difficult, however, to assess the long-term consequences of CO2 flooding in the subsurface from decadal observations of existing disposal sites. Both the site design and long-term safety modelling critically depend on how and where CO2 will be stored in the site over its lifetime. Within a geological storage site, the injected CO2 can dissolve in solution or precipitate as carbonate minerals. Here we identify and quantify the principal mechanism of CO2 fluid phase removal in nine natural gas fields in North America, China and Europe, using noble gas and carbon isotope tracers. The natural gas fields investigated in our study are dominated by a CO2 phase and provide a natural analogue for assessing the geological storage of anthropogenic CO2 over millennial timescales. We find that in seven gas fields with siliciclastic or carbonate-dominated reservoir lithologies, dissolution in formation water at a pH of 5-5.8 is the sole major sink for CO2. In two fields with siliciclastic reservoir lithologies, some CO2 loss through precipitation as carbonate minerals cannot be ruled out, but can account for a maximum of 18 per cent of the loss of emplaced CO2. In view of our findings that geological mineral fixation is a minor CO2 trapping mechanism in natural gas fields, we suggest that long-term anthropogenic CO2 storage models in similar geological systems should focus on the potential mobility of CO2 dissolved in water.

Journal ArticleDOI
TL;DR: This paper proposes a distributed game-theoretical framework over multiuser cooperative communication networks to achieve optimal relay selection and power allocation without knowledge of CSI.
Abstract: The performance in cooperative communication depends on careful resource allocation such as relay selection and power control, but the traditional centralized resource allocation requires precise measurements of channel state information (CSI). In this paper, we propose a distributed game-theoretical framework over multiuser cooperative communication networks to achieve optimal relay selection and power allocation without knowledge of CSI. A two-level Stackelberg game is employed to jointly consider the benefits of the source node and the relay nodes in which the source node is modeled as a buyer and the relay nodes are modeled as sellers, respectively. The proposed approach not only helps the source find the relays at relatively better locations and "buy" an optimal amount of power from the relays, but also helps the competing relays maximize their own utilities by asking the optimal prices. The game is proved to converge to a unique optimal equilibrium. Moreover, the proposed resource allocation scheme with the distributed game can achieve comparable performance to that employing centralized schemes.
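As a rough illustration of the buyer-seller structure described above (not the paper's utility functions or proofs), the sketch below lets one source choose how much relay power to purchase given posted prices, while each relay adjusts its price by gradient ascent on profit. The separable log-utility, channel gains, and costs are all invented; under this toy model the relay best responses have a closed form, which the iteration should approach.

```python
import numpy as np

# Invented parameters: two relays with channel gains g and power costs c, and a
# source that values relayed power from relay i with weight a[i].
a = np.array([4.0, 3.0])
g = np.array([2.0, 1.5])
c = np.array([0.5, 0.4])

def source_demand(p):
    """Buyer (source) best response: power bought from each relay at prices p,
    from maximizing sum_i a_i*log(1 + g_i*P_i) - p_i*P_i."""
    return np.maximum(a / p - 1.0 / g, 0.0)

# Sellers (relays) raise or lower prices by gradient ascent on profit (p_i - c_i)*P_i.
p = np.array([1.0, 1.0])
eps = 1e-4
for it in range(200):
    P = source_demand(p)
    grad = np.zeros_like(p)
    for i in range(2):          # numerical profit gradient, other price held fixed
        p_hi = p.copy()
        p_hi[i] += eps
        grad[i] = ((p_hi[i] - c[i]) * source_demand(p_hi)[i]
                   - (p[i] - c[i]) * P[i]) / eps
    p = np.maximum(p + 0.05 * grad, c)   # prices never fall below cost

print("converged prices: ", p.round(3))
print("power purchased:  ", source_demand(p).round(3))
print("closed-form check:", np.sqrt(a * c * g).round(3))  # analytic optimum for this toy model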

Journal ArticleDOI
TL;DR: The results supported the hypotheses, indicating that women were described as more communal and less agentic than men and that communal characteristics have a negative relationship with hiring decisions in academia that are based on letters of recommendation.
Abstract: In 2 studies that draw from the social role theory of sex differences (A. H. Eagly, W. Wood, & A. B. Diekman, 2000), the authors investigated differences in agentic and communal characteristics in letters of recommendation for men and women for academic positions and whether such differences influenced selection decisions in academia. The results supported the hypotheses, indicating (a) that women were described as more communal and less agentic than men (Study 1) and (b) that communal characteristics have a negative relationship with hiring decisions in academia that are based on letters of recommendation (Study 2). Such results are particularly important because letters of recommendation continue to be heavily weighted and commonly used selection tools (R. D. Arvey & T. E. Campion, 1982; R. M. Guion, 1998), particularly in academia (E. P. Sheehan, T. M. McDevitt, & H. C. Ross, 1998).

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the Cenomanian Dunvegan Formation and the Turonian Ferron Sandstone Member of the Mancos Shale Formation, in Utah, and showed that they fall within the predicted limits of rivers that are capable of generating hyperpycnal plumes.
Abstract: Despite the historical assumption that the bulk of marine “shelf” mud is deposited by gradual fallout from suspension in quiet water, recent studies of modern muddy shelves and their associated rivers show that they are dominated by hyperpycnal fluid mud. This has not been widely applied to the interpretation of ancient sedimentary fluvio-deltaic systems, such as dominate the mud-rich Cretaceous Western Interior Seaway of North America. We analyze two such systems, the Turonian Ferron Sandstone Member of the Mancos Shale Formation, in Utah, and the Cenomanian Dunvegan Formation in Alberta. Paleodischarge estimates of trunk rivers show that they fall within the predicted limits of rivers that are capable of generating hyperpycnal plumes. The associated prodeltaic mudstones match modern hyperpycnite facies models, and suggest a correspondingly hyperpycnal character. Physical sedimentary structures include diffusely stratified beds that show both normal and inverse grading, indicating sustained flows that waxed and waned. They also display low intensities of bioturbation, which reflect the high physical and chemical stresses of hyperpycnal environments. Distinct “mantle and swirl” biogenic structures indicate soupground conditions, typical of the fluid muds that represent the earliest stages of deposition in a hyperpycnal plume. Hyperpycnal conditions are ameliorated by the fact that these rivers were relatively small, dirty systems that drained an active orogenic belt during humid temperate (Dunvegan Formation) to subtropical (Ferron Sandstone Member) “greenhouse” conditions. During sustained periods of flooding, such as during monsoons, the initial river flood may lower salinities within the inshore area, effectively “prepping” the area and allowing subsequent floods to become hyperpycnal much more easily. Although shelf slopes were too low to allow long-run-out hyperpycnal flows, the storm-dominated nature of the seaway likely allowed fluid mud to be transported for significant distances across and along the paleo-shelf. Rapidly deposited prodeltaic hyperpycnites are thus considered to form a significant component of the muddy shelf successions that comprise the thick shale formations of the Cretaceous Western Interior Seaway.

Journal ArticleDOI
TL;DR: The proposed game-theoretic framework for modeling the interactions among multiple primary users (or service providers) and multiple secondary users is used to investigate network dynamics under different system parameter settings and under system perturbation.
Abstract: We consider the problem of spectrum trading with multiple licensed users (i.e., primary users) selling spectrum opportunities to multiple unlicensed users (i.e., secondary users). The secondary users can adapt the spectrum buying behavior (i.e., evolve) by observing the variations in price and quality of spectrum offered by the different primary users or primary service providers. The primary users or primary service providers can adjust their behavior in selling the spectrum opportunities to secondary users to achieve the highest utility. In this paper, we model the evolution and the dynamic behavior of secondary users using the theory of evolutionary game. An algorithm for the implementation of the evolution process of a secondary user is also presented. To model the competition among the primary users, a noncooperative game is formulated where the Nash equilibrium is considered as the solution (in terms of size of offered spectrum to the secondary users and spectrum price). For a primary user, an iterative algorithm for strategy adaptation to achieve the solution is presented. The proposed game-theoretic framework for modeling the interactions among multiple primary users (or service providers) and multiple secondary users is used to investigate network dynamics under different system parameter settings and under system perturbation.
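The evolution of secondary-user behavior can be illustrated with textbook replicator dynamics. The sketch below is a hypothetical two-primary-user example with an invented payoff model (offered bandwidth shared among the secondary users who select that primary, minus price); it is not the paper's utility specification or its primary-user Nash game.

```python
import numpy as np

# Invented setup: N secondary users choose between two primary users (strategies),
# each offering bandwidth b_i at price p_i; payoff falls as more users share a band.
N = 50
b = np.array([20.0, 12.0])   # offered spectrum (bandwidth units)
p = np.array([1.0, 0.6])     # price charged per secondary user

def payoffs(x):
    """x = fraction of secondary users choosing primary 1."""
    shares = np.array([x, 1.0 - x])
    return b / (1.0 + N * shares) - p

x, eta = 0.9, 0.5
for t in range(500):
    u = payoffs(x)
    u_bar = x * u[0] + (1 - x) * u[1]
    x += eta * x * (u[0] - u_bar)        # replicator dynamics update
    x = min(max(x, 1e-6), 1 - 1e-6)

print("evolutionary-equilibrium share choosing primary 1:", round(x, 3))
print("payoffs at equilibrium (should be nearly equal):  ", payoffs(x).round(3))
```

At the rest point the two strategies earn equal payoffs, which is the evolutionary-equilibrium condition the paper's secondary-user game relies on.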

Journal ArticleDOI
TL;DR: In this article, the authors propose that internal marketing is fundamentally a process in which leaders instill into followers a sense of oneness with the organization, formally known as "organizational identification" (OI).
Abstract: There is little empirical research on internal marketing despite its intuitive appeal and anecdotal accounts of its benefits. Adopting a social identity theory perspective, the authors propose that internal marketing is fundamentally a process in which leaders instill into followers a sense of oneness with the organization, formally known as “organizational identification” (OI). The authors test the OI-transfer research model in two multinational studies using multilevel and multisource data. Hierarchical linear modeling analyses show that the OI-transfer process takes place in the relationships between business unit managers and salespeople and between regional directors and business unit managers. Furthermore, both leader–follower dyadic tenure and charismatic leadership moderate this cascading effect. Leaders with a mismatch between their charisma and OI ultimately impair followers' OI. In turn, customer-contact employees' OI strongly predicts their sales performance. Finally, both employees' ...

Journal ArticleDOI
TL;DR: In this article, the authors used an information asymmetry index based on measures of adverse selection developed by the market microstructure literature to test if information asymmetry is the sole determinant of capital structure decisions as suggested by the pecking order theory.
Abstract: Using an information asymmetry index based on measures of adverse selection developed by the market microstructure literature, we test if information asymmetry is the sole determinant of capital structure decisions as suggested by the pecking order theory. Our tests rely exclusively on measures of the market's assessment of adverse selection risk rather than on ex-ante firm characteristics. We find that information asymmetry does affect capital structure decisions of U.S. firms over the period 1973-2002, especially when firms' financing needs are low and when firms are financially constrained. We also find a significant degree of intertemporal variability in firms' degree of information asymmetry, as well as in its impact on firms' debt issuance decisions. Our findings based on the information asymmetry index are robust to sorting firms based on size and firm insiders' trading activity, two popular alternative proxies for the severity of adverse selection. Overall, this evidence explains why the pecking order theory is only partially successful in explaining all of firms' capital structure decisions. It also suggests that the theory finds support when its basic assumptions hold in the data, as it should reasonably be expected of any theory.
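One generic way to build an information asymmetry index of the kind described above is to rank several adverse-selection proxies within each year and average the ranks. The pandas sketch below uses placeholder column names and toy numbers; it is not the authors' exact set of microstructure measures or weighting scheme.

```python
import pandas as pd

# Hypothetical firm-year panel with several adverse-selection proxies
# (column names are placeholders, not the paper's measures).
panel = pd.DataFrame({
    "firm":    ["A", "B", "C", "A", "B", "C"],
    "year":    [2000, 2000, 2000, 2001, 2001, 2001],
    "spread_adverse_sel": [0.80, 0.30, 0.50, 0.70, 0.20, 0.60],
    "price_impact":       [1.20, 0.40, 0.90, 1.00, 0.30, 1.10],
    "pin_like_measure":   [0.25, 0.10, 0.18, 0.22, 0.12, 0.20],
})

measures = ["spread_adverse_sel", "price_impact", "pin_like_measure"]

# Rank each measure within year (higher = more adverse selection), scale to [0, 1],
# then average across measures to get one asymmetry index per firm-year.
ranks = panel.groupby("year")[measures].rank(pct=True)
panel["asymmetry_index"] = ranks.mean(axis=1)

print(panel[["firm", "year", "asymmetry_index"]])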

Journal ArticleDOI
TL;DR: In intact eyes, lens-induced relative peripheral hyperopia produced central axial myopia, and eliminating the fovea by laser photoablation did not prevent compensating myopic changes in response to optically imposed hyperopia.

Posted Content
TL;DR: This article found evidence of short term predictability for 11 out of 12 currencies vis-a-vis the U.S. dollar over the post-Bretton Woods float, with the strongest evidence coming from specifications that incorporate heterogeneous coefficients and interest rate smoothing.
Abstract: An extensive literature that studied the performance of empirical exchange rate models following Meese and Rogoff's (1983a) seminal paper has not convincingly found evidence of out-of-sample exchange rate predictability. This paper extends the conventional set of models of exchange rate determination by investigating predictability of models that incorporate Taylor rule fundamentals. We find evidence of short term predictability for 11 out of 12 currencies vis-a-vis the U.S. dollar over the post-Bretton Woods float, with the strongest evidence coming from specifications that incorporate heterogeneous coefficients and interest rate smoothing. The evidence of predictability is much stronger with Taylor rule models than with conventional interest rate, purchasing power parity, or monetary models.
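The out-of-sample exercise behind this kind of study can be sketched as rolling regressions of exchange-rate changes on a lagged Taylor-rule fundamental, compared against a driftless random walk by mean squared prediction error. The snippet below uses synthetic data and invented coefficients purely to show the mechanics; a real application would use actual inflation and output-gap differentials (plus interest rate smoothing) and a formal comparison such as the Clark-West test.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400

# Synthetic monthly data (illustrative only): inflation and output-gap
# differentials (home minus US) and the change in the log exchange rate.
infl_diff = rng.normal(0, 1, T)
gap_diff = rng.normal(0, 1, T)
taylor_fund = 1.5 * infl_diff + 0.5 * gap_diff       # Taylor-rule fundamental
ds = 0.05 * taylor_fund + rng.normal(0, 1, T)        # exchange-rate change

window = 120
errs_model, errs_rw = [], []
for t in range(window, T - 1):
    # Rolling regression of the exchange-rate change on the lagged fundamental.
    X = np.column_stack([np.ones(window), taylor_fund[t - window:t]])
    y = ds[t - window + 1:t + 1]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    forecast = beta[0] + beta[1] * taylor_fund[t]
    errs_model.append((ds[t + 1] - forecast) ** 2)
    errs_rw.append(ds[t + 1] ** 2)                    # random walk forecasts no change

ratio = np.mean(errs_model) / np.mean(errs_rw)
print("MSPE ratio (Taylor-rule model / random walk):", round(ratio, 3))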

Journal ArticleDOI
TL;DR: Comparative effectiveness research in the form of nonrandomized studies using secondary databases can be designed with rigorous elements and conducted with sophisticated statistical methods to improve causal inference of treatment effects.

Journal ArticleDOI
TL;DR: A “linearizer” circuit was constructed by adding TetR autoregulation to the original cascade; a massive (7-fold) reduction of noise at intermediate induction and a linearization of the dose–response before saturation were observed, and a simple mathematical argument indicates that the linearization is highly robust to parameter variations.
Abstract: Although several recent studies have focused on gene autoregulation, the effects of negative feedback (NF) on gene expression are not fully understood. Our purpose here was to determine how the strength of NF regulation affects the characteristics of gene expression in yeast cells harboring chromosomally integrated transcriptional cascades that consist of the yEGFP reporter controlled by (i) the constitutively expressed tetracycline repressor TetR or (ii) TetR repressing its own expression. Reporter gene expression in the cascade without feedback showed a steep (sigmoidal) dose–response and a wide, nearly bimodal yEGFP distribution, giving rise to a noise peak at intermediate levels of induction. We developed computational models that reproduced the steep dose–response and the noise peak and predicted that negative autoregulation changes reporter expression from bimodal to unimodal and transforms the dose–response from sigmoidal to linear. Prompted by these predictions, we constructed a “linearizer” circuit by adding TetR autoregulation to our original cascade and observed a massive (7-fold) reduction of noise at intermediate induction and linearization of dose–response before saturation. A simple mathematical argument explained these findings and indicated that linearization is highly robust to parameter variations. These findings have important implications for gene expression control in eukaryotic cells, including the design of synthetic expression systems.
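The qualitative claim that negative autoregulation linearizes the dose-response can be reproduced with a deterministic steady-state toy model. The sketch below is not the authors' model: it assumes Hill-type repression of a shared promoter, an inducer that inactivates TetR, and invented parameters, and it compares the open-loop cascade (steep, sigmoidal response) with the autoregulated one (approximately linear before saturation).

```python
import numpy as np

# Invented parameters for a TetR -> reporter cascade; inducer "a" inactivates TetR.
K, n = 1.0, 4          # repression threshold and Hill coefficient of the promoter
Ka, m = 10.0, 2        # inducer binding threshold and cooperativity
Rmax, R0 = 20.0, 20.0  # max TetR production (feedback) / fixed TetR level (open loop)

def promoter_activity(R_active):
    """Fraction of maximal expression from a TetR-repressed promoter."""
    return 1.0 / (1.0 + (R_active / K) ** n)

def active_fraction(a):
    """Fraction of TetR still able to repress at inducer level a."""
    return 1.0 / (1.0 + (a / Ka) ** m)

def reporter_open_loop(a):
    return promoter_activity(R0 * active_fraction(a))

def reporter_autoregulated(a):
    # Steady-state TetR solves R = Rmax * promoter_activity(R * s); solved by bisection.
    s = active_fraction(a)
    lo, hi = 0.0, Rmax
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid - Rmax * promoter_activity(mid * s) > 0:
            hi = mid
        else:
            lo = mid
    R = 0.5 * (lo + hi)
    return promoter_activity(R * s)   # reporter shares the same regulated promoter

for a in np.linspace(0, 50, 11):
    print(f"a={a:5.1f}  open-loop={reporter_open_loop(a):.3f}  "
          f"autoregulated={reporter_autoregulated(a):.3f}")
```

In this toy model the open-loop reporter stays near zero until high induction and then rises steeply, while the autoregulated reporter rises roughly linearly with inducer, mirroring the qualitative behavior described in the abstract.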

Journal ArticleDOI
TL;DR: In this paper, the authors estimate market skewness risk from daily S&P 500 index option data and find that the premium for bearing this risk is between -6.00% and -8.40% annually.
Abstract: The cross-section of stock returns has substantial exposure to risk captured by higher moments in market returns. We estimate these moments from daily S&P 500 index option data. The resulting time series of factors are thus genuinely conditional and forward-looking. Stocks with high sensitivities to innovations in implied market volatility and skewness exhibit low returns on average, whereas those with high sensitivities to innovations in implied market kurtosis exhibit high returns on average. The results on market skewness risk are extremely robust to various permutations of the empirical setup. The estimated premium for bearing market skewness risk is between -6.00% and -8.40% annually. This market skewness risk premium is economically significant and cannot be explained by other common risk factors such as the market excess return or the size, book-to-market, momentum, and market volatility factors. Using ICAPM intuition, the negative price of market skewness risk indicates that it is a state variable that negatively affects the future investment opportunity set.
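Option-implied market moments of the kind used here are commonly computed with the Bakshi-Kapadia-Madan (2003) model-free formulas. The sketch below is an illustration under stated assumptions, not the authors' pipeline: it builds a flat-volatility Black-Scholes option grid purely to be self-contained (so implied skewness should come out near zero and kurtosis near three, a sanity check) and integrates the variance, cubic, and quartic contracts with a trapezoid rule.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

def bs_price(S, K, r, T, sigma, kind):
    """Black-Scholes price, used only to fabricate a self-contained option grid."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    if kind == "call":
        return S * N(d1) - K * exp(-r * T) * N(d2)
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def bkm_implied_moments(S, r, T, strikes, otm_prices):
    """Model-free implied variance, skewness, kurtosis of the log return
    (Bakshi-Kapadia-Madan 2003), from out-of-the-money option prices."""
    k = np.log(strikes / S)
    V = trapz(2 * (1 - k) / strikes ** 2 * otm_prices, strikes)
    W = trapz((6 * k - 3 * k ** 2) / strikes ** 2 * otm_prices, strikes)
    X = trapz((12 * k ** 2 - 4 * k ** 3) / strikes ** 2 * otm_prices, strikes)
    e = exp(r * T)
    mu = e - 1 - e * V / 2 - e * W / 6 - e * X / 24
    var = e * V - mu ** 2
    skew = (e * W - 3 * mu * e * V + 2 * mu ** 3) / var ** 1.5
    kurt = (e * X - 4 * mu * e * W + 6 * mu ** 2 * e * V - 3 * mu ** 4) / var ** 2
    return var, skew, kurt

# Hypothetical flat-volatility surface: strikes below spot priced as puts, above as calls.
S, r, T, sigma = 100.0, 0.02, 0.5, 0.2
strikes = np.linspace(50, 200, 301)
otm = np.array([bs_price(S, K, r, T, sigma, "call" if K > S else "put")
                for K in strikes])

var, skew, kurt = bkm_implied_moments(S, r, T, strikes, otm)
print(f"implied variance {var:.4f}, skewness {skew:.3f}, kurtosis {kurt:.3f}")
```

In an empirical application, such moments would be recomputed each day from observed S&P 500 option prices, and innovations in the resulting series would serve as the skewness and kurtosis risk factors whose prices the paper estimates.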

Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship among consumption emotions, customer satisfaction, switching barriers, and revisit intention in a full-service restaurant setting and found that multiple components of consumption emotions significantly affected customer satisfaction and satisfaction mediated the effect of emotion factors on revisit intention.

Posted Content
TL;DR: In this article, the authors revisited findings that returns are negatively related to financial distress intensity and leverage, and showed that return premiums to low leverage and low distress are significant in raw returns, and even stronger in risk-adjusted returns.
Abstract: We revisit findings that returns are negatively related to financial distress intensity and leverage. These are puzzles under frictionless capital markets assumptions, but consistent with optimizing firms that differ in their exposure to financial distress costs. Firms with high costs choose low leverage to avoid distress, but retain exposure to the systematic risk of bearing such costs in low states. Empirical results are consistent with this explanation. The return premiums to low leverage and low distress are significant in raw returns, and even stronger in risk-adjusted returns. When in distress, low leverage firms suffer more than high leverage firms as measured by a deterioration in accounting operating performance and heightened exposure to systematic risk. The connection between return premiums and distress costs is apparent in subperiod evidence—both are small or insignificant prior to 1980 and larger and significant thereafter.