
Showing papers by "Arizona State University" published in 2005


Journal ArticleDOI
18 Nov 2005-Science
TL;DR: Covalent organic frameworks (COFs) have been designed and successfully synthesized by condensation reactions of phenyl diboronic acid and hexahydroxytriphenylene to form rigid porous architectures with pore sizes ranging from 7 to 27 angstroms.
Abstract: Covalent organic frameworks (COFs) have been designed and successfully synthesized by condensation reactions of phenyl diboronic acid {C6H4[B(OH)2]2} and hexahydroxytriphenylene [C18H6(OH)6]. Powder x-ray diffraction studies of the highly crystalline products (C3H2BO)6·(C9H12)1 (COF-1) and C9H4BO2 (COF-5) revealed expanded porous graphitic layers that are either staggered (COF-1, P63/mmc) or eclipsed (COF-5, P6/mmm). Their crystal structures are entirely held by strong bonds between B, C, and O atoms to form rigid porous architectures with pore sizes ranging from 7 to 27 angstroms. COF-1 and COF-5 exhibit high thermal stability (to temperatures of 500° to 600°C), permanent porosity, and high surface areas (711 and 1590 square meters per gram, respectively).

4,843 citations


Journal ArticleDOI
TL;DR: With the categorizing framework, the efforts toward building an integrated system for intelligent feature selection are continued, and an illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms.
Abstract: This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.
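
As a toy illustration of how such a meta algorithm might combine individual selectors (this sketch is not the paper's own system; the dataset, the two scikit-learn filter methods, and the rank-averaging rule are all illustrative assumptions):

```python
# Hypothetical sketch: integrate two filter-style feature selectors by
# averaging their feature ranks, in the spirit of the paper's meta algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

# Score every feature under two different evaluation criteria.
f_scores, _ = f_classif(X, y)                          # ANOVA F-test criterion
mi_scores = mutual_info_classif(X, y, random_state=0)  # information criterion

def ranks(scores):
    """Rank features so that 0 marks the highest-scoring feature."""
    return np.argsort(np.argsort(-scores))

# Aggregate the two rankings and keep the five best-agreed features.
combined = (ranks(f_scores) + ranks(mi_scores)) / 2.0
selected = np.argsort(combined)[:5]
print("selected features:", sorted(selected.tolist()))
```

A real integration layer would also consult the paper's categorizing dimensions (search strategy, evaluation criterion, data mining task) when deciding which algorithms to combine.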

2,605 citations


Journal ArticleDOI
TL;DR: NVS, the Newest Vital Sign, is suitable for use as a quick screening test for limited literacy in primary health care settings and correlates with the Test of Functional Health Literacy in Adults.
Abstract: PURPOSE Current health literacy screening instruments for health care settings are either too long for routine use or available only in English. Our objective was to develop a quick and accurate screening test for limited literacy available in English and Spanish. METHODS We administered candidate items for the new instrument and also the Test of Functional Health Literacy in Adults (TOFHLA) to English-speaking and Spanish-speaking primary care patients. We measured internal consistency with Cronbach's α and assessed criterion validity by measuring correlations with TOFHLA scores, using TOFHLA scores as the reference standard to plot receiver operating characteristic (ROC) curves. RESULTS The final instrument, the Newest Vital Sign (NVS), is reliable (Cronbach α > 0.76 in English and 0.69 in Spanish) and correlates with the TOFHLA. Area under the ROC curve is 0.88 for English and 0.72 for Spanish versions. Patients with more than 4 correct responses are unlikely to have low literacy, whereas fewer than 4 correct answers indicate the possibility of limited literacy. CONCLUSION NVS is suitable for use as a quick screening test for limited literacy in primary health care settings.
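
The screening cutoffs quoted above are simple enough to encode directly; the function below is a hypothetical helper that mirrors the abstract's wording (the published instrument's own scoring bands should be taken from the paper itself):

```python
def nvs_risk_category(correct_answers: int) -> str:
    """Hypothetical helper encoding the cutoffs quoted in the abstract:
    >4 correct -> limited literacy unlikely; <4 correct -> possible."""
    if not 0 <= correct_answers <= 6:
        raise ValueError("NVS scores range from 0 to 6 correct answers")
    if correct_answers > 4:
        return "limited literacy unlikely"
    if correct_answers < 4:
        return "possibility of limited literacy"
    return "borderline (exactly 4 correct; see the paper's scoring guidance)"
```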

1,941 citations


Journal ArticleDOI
01 Jan 2005
TL;DR: Unemployed individuals had lower psychological and physical well-being than did their employed counterparts, and work-role centrality, coping resources, cognitive appraisals, and coping strategies displayed stronger relationships with mental health than did human capital or demographic variables.
Abstract: The authors used theoretical models to organize the diverse unemployment literature, and meta-analytic techniques were used to examine the impact of unemployment on worker well-being across 104 empirical studies with 437 effect sizes. Unemployed individuals had lower psychological and physical well-being than did their employed counterparts. Unemployment duration and sample type (school leaver vs. mature unemployed) moderated the relationship between mental health and unemployment, but the current unemployment rate and the amount of unemployment benefits did not. Within unemployed samples, work-role centrality, coping resources (personal, social, financial, and time structure), cognitive appraisals, and coping strategies displayed stronger relationships with mental health than did human capital or demographic variables. The authors identify gaps in the literature and propose directions for future unemployment research.

1,889 citations


Journal ArticleDOI
TL;DR: A standardized measure of genetic differentiation is introduced here, one which has the same range, 0–1, for all levels of genetic variation, and allows comparison between loci with different levels of genetic variation.
Abstract: Interpretation of genetic differentiation values is often problematic because of their dependence on the level of genetic variation. For example, the maximum level of GST is less than the average within population homozygosity so that for highly variable loci, even when no alleles are shared between subpopulations, GST may be low. To remedy this difficulty, a standardized measure of genetic differentiation is introduced here, one which has the same range, 0–1, for all levels of genetic variation. With this measure, the magnitude is the proportion of the maximum differentiation possible for the level of subpopulation homozygosity observed. This is particularly important for situations in which the mutation rate is of the same magnitude or higher than the rate of gene flow. The standardized measure allows comparison between loci with different levels of genetic variation, such as allozymes and microsatellite loci, or mtDNA and Y-chromosome genes, and for genetic differentiation for organisms with d...
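
For reference, the standardized measure takes the form below, as this paper is usually cited (k is the number of subpopulations and H_S the mean within-subpopulation heterozygosity); verify the exact notation against the original:

```latex
% Observed differentiation rescaled by its maximum attainable value
% given the within-subpopulation heterozygosity H_S:
G'_{ST} \;=\; \frac{G_{ST}}{G_{ST(\max)}},
\qquad
G_{ST(\max)} \;=\; \frac{(k-1)(1-H_S)}{k-1+H_S}
```

Because G_ST(max) shrinks as H_S grows, dividing by it restores the full 0–1 range for highly variable loci such as microsatellites.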

1,707 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore key factors that influence the initial self-service technology trial decision, specifically focusing on actual behavior in situations in which the consumer has a choice among delivery modes, and show that the consumer readiness variables of role clarity, motivation, and ability are key mediators between established adoption constructs (innovation characteristics and individual differences) and the likelihood of trial.
Abstract: Electronic commerce is an increasingly popular business model with a wide range of tools available to firms. An application that is becoming more common is the use of self-service technologies (SSTs), such as telephone banking, automated hotel checkout, and online investment trading, whereby customers produce services for themselves without assistance from firm employees. Widespread introduction of SSTs is apparent across industries, yet relatively little is known about why customers decide to try SSTs and why some SSTs are more widely accepted than others. In this research, the authors explore key factors that influence the initial SST trial decision, specifically focusing on actual behavior in situations in which the consumer has a choice among delivery modes. The authors show that the consumer readiness variables of role clarity, motivation, and ability are key mediators between established adoption constructs (innovation characteristics and individual differences) and the likelihood of trial.

1,594 citations


Journal ArticleDOI
TL;DR: A review and introduction to the Special Issue on Strategy Research in Emerging Economies, as presented in this paper, considers the nature of theoretical contributions thus far on strategy in emerging economies and classifies the research through four strategic options: (1) firms from developed economies entering emerging economies; (2) domestic firms competing within emerging economies; (3) firms from emerging economies entering other emerging economies; and (4) firms from emerging economies entering developed economies.
Abstract: This review and introduction to the Special Issue on ‘Strategy Research in Emerging Economies’ considers the nature of theoretical contributions thus far on strategy in emerging economies. We classify the research through four strategic options: (1) firms from developed economies entering emerging economies; (2) domestic firms competing within emerging economies; (3) firms from emerging economies entering other emerging economies; and (4) firms from emerging economies entering developed economies. Among the four perspectives examined (institutional theory, transaction cost theory, resource-based theory, and agency theory), the most dominant seems to be institutional theory. Most existing studies that make a contribution blend institutional theory with one of the other three perspectives, including seven out of the eight papers included in this Special Issue. We suggest a future research agenda based around the four strategies and four theoretical perspectives. Given the relative emphasis of research so far on the first and second strategic options, we believe that there is growing scope for research that addresses the third and fourth.

1,540 citations


Journal ArticleDOI
TL;DR: Results of the Monte Carlo simulation indicated that measurement model misspecification can inflate unstandardized structural parameter estimates by as much as 400% or deflate them by as much as 80% and lead to Type I or Type II errors of inference, depending on whether the exogenous or the endogenous latent construct is misspecified.
Abstract: The purpose of this study was to review the distinction between formative- and reflective-indicator measurement models, articulate a set of criteria for deciding whether measures are formative or reflective, illustrate some commonly researched constructs that have formative indicators, empirically test the effects of measurement model misspecification using a Monte Carlo simulation, and recommend new scale development procedures for latent constructs with formative indicators. Results of the Monte Carlo simulation indicated that measurement model misspecification can inflate unstandardized structural parameter estimates by as much as 400% or deflate them by as much as 80% and lead to Type I or Type II errors of inference, depending on whether the exogenous or the endogenous latent construct is misspecified. Implications of this research are discussed.

1,528 citations


Journal ArticleDOI
TL;DR: Conventional treatment would achieve low removal of many EDC/PPCPs, while addition of PAC and/or ozone could substantially improve their removal, and existing strategies that predict relative removal of herbicides, pesticides, and other organic pollutants can be directly applied.
Abstract: The potential occurrence of endocrine-disrupting compounds (EDCs) as well as pharmaceuticals and personal care products (PPCPs) in drinking water supplies raises concern over the removal of these compounds by common drinking water treatment processes. Three drinking water supplies were spiked with 10 to 250 ng/L of 62 different EDC/PPCPs; one model water containing an NOM isolate was spiked with 49 different EDC/PPCPs. Compounds were detected by LC/MS/MS or GC/MS/MS. These test waters were subjected to bench-scale experimentation to simulate individual treatment processes in a water treatment plant (WTP). Aluminum sulfate and ferric chloride coagulants or chemical lime softening removed some polyaromatic hydrocarbons (PAHs) but little else, whereas powdered activated carbon (PAC) removed up to 98% of GC/MS/MS compounds (more volatile) and 10% to >95% of LC/MS/MS compounds (more polar); higher PAC dosages improved EDC/PPCP removal. EDC/PPCP per...

1,433 citations


Journal ArticleDOI
TL;DR: The authors investigated residents' perceptions of tourism's impact on communities and found that those who feel tourism is important for economic development, benefit from it, and are knowledgeable about it perceive greater positive impacts, but do not differ from others with respect to perceptions of tourism's negative consequences.

Journal ArticleDOI
TL;DR: The development of the method of particle image velocimetry (PIV) is traced by describing some of the milestones that have enabled new and/or better measurements to be made.
Abstract: The development of the method of particle image velocimetry (PIV) is traced by describing some of the milestones that have enabled new and/or better measurements to be made. The current status of PIV is summarized, and some goals for future advances are addressed.
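
At its core, PIV estimates local velocity by cross-correlating small interrogation windows between two successive frames and locating the correlation peak. A minimal NumPy sketch of that single step (synthetic data; the 32-pixel window and the imposed shift are arbitrary illustration choices, not values from the review):

```python
# One-window PIV displacement estimate via FFT-based cross-correlation.
import numpy as np

rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))                           # interrogation window
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))   # known displacement

# Circular cross-correlation: the peak location gives the displacement.
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

# Wrap peak coordinates into signed displacements.
n = frame_a.shape[0]
dy = dy - n if dy > n // 2 else dy
dx = dx - n if dx > n // 2 else dx
print(f"estimated displacement: ({dy}, {dx})")           # expect (3, -2)
```

Production PIV codes add sub-pixel peak fitting, window overlap, and outlier validation on top of this kernel.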

Journal ArticleDOI
TL;DR: These Standards will inform efforts in the field to find prevention programs and policies that are of proven efficacy, effectiveness, or readiness for adoption and will guide prevention scientists as they seek to discover, research, and bring to the field new prevention programs and policies.
Abstract: Ever increasing demands for accountability, together with the proliferation of lists of evidence-based prevention programs and policies, led the Society for Prevention Research to charge a committee with establishing standards for identifying effective prevention programs and policies. Recognizing that interventions that are effective and ready for dissemination are a subset of effective programs and policies, and that effective programs and policies are a subset of efficacious interventions, SPR’s Standards Committee developed overlapping sets of standards. We designed these Standards to assist practitioners, policy makers, and administrators to determine which interventions are efficacious, which are effective, and which are ready for dissemination. Under these Standards, an efficacious intervention will have been tested in at least two rigorous trials that (1) involved defined samples from defined populations, (2) used psychometrically sound measures and data collection procedures; (3) analyzed their data with rigorous statistical approaches; (4) showed consistent positive effects (without serious iatrogenic effects); and (5) reported at least one significant long-term follow-up. An effective intervention under these Standards will not only meet all standards for efficacious interventions, but also will have (1) manuals, appropriate training, and technical support available to allow third parties to adopt and implement the intervention; (2) been evaluated under real-world conditions in studies that included sound measurement of the level of implementation and engagement of the target audience (in both the intervention and control conditions); (3) indicated the practical importance of intervention outcome effects; and (4) clearly demonstrated to whom intervention findings can be generalized. An intervention recognized as ready for broad dissemination under these Standards will not only meet all standards for efficacious and effective interventions, but will also provide (1) evidence of the ability to “go to scale”; (2) clear cost information; and (3) monitoring and evaluation tools so that adopting agencies can monitor or evaluate how well the intervention works in their settings. Finally, the Standards Committee identified possible standards desirable for current and future areas of prevention science as the field develops. If successful, these Standards will inform efforts in the field to find prevention programs and policies that are of proven efficacy, effectiveness, or readiness for adoption and will guide prevention scientists as they seek to discover, research, and bring to the field new prevention programs and policies.

Journal ArticleDOI
TL;DR: Compared with direct transmission and traditional multihop protocols, the results reveal that optimum relay channel signaling can significantly outperform multihop protocols, and that power allocation has a significant impact on the performance.
Abstract: We consider three-node wireless relay channels in a Rayleigh-fading environment. Assuming transmitter channel state information (CSI), we study upper bounds and lower bounds on the outage capacity and the ergodic capacity. Our studies take into account practical constraints on the transmission/reception duplexing at the relay node and on the synchronization between the source node and the relay node. We also explore power allocation. Compared to the direct transmission and traditional multihop protocols, our results reveal that optimum relay channel signaling can significantly outperform multihop protocols, and that power allocation has a significant impact on the performance.
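
To make the direct-versus-multihop comparison concrete, here is an illustrative Monte Carlo estimate of ergodic capacity under Rayleigh fading. This uses generic textbook rate formulas, not the paper's optimized signaling schemes; the SNR values and the half-duplex factor 1/2 are assumptions:

```python
# Ergodic capacity: direct transmission vs. an idealized two-hop
# decode-and-forward protocol over Rayleigh fading (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
snr_sd, snr_sr, snr_rd = 1.0, 4.0, 4.0  # source-dest, source-relay, relay-dest

# |h|^2 is exponentially distributed for unit-power Rayleigh fading.
h_sd, h_sr, h_rd = (rng.exponential(size=n) for _ in range(3))

c_direct = np.log2(1 + snr_sd * h_sd)
# Half-duplex two-hop: rate is halved and limited by the weaker hop.
c_twohop = 0.5 * np.log2(1 + np.minimum(snr_sr * h_sr, snr_rd * h_rd))

print(f"direct  ergodic capacity ~ {c_direct.mean():.3f} bit/s/Hz")
print(f"two-hop ergodic capacity ~ {c_twohop.mean():.3f} bit/s/Hz")
```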

Journal ArticleDOI
TL;DR: In this article, the authors re-examine the relation between firm value and board structure and the factors associated with cross-sectional variation in board structure, and find that firms with high advising requirements have larger boards and higher fraction of insiders on the board.
Abstract: This paper re-examines (1) the relation between firm value and board structure and (2) the factors associated with cross-sectional variation in board structure. Conventional wisdom and existing empirical research suggest that firm value decreases as the size of the firm's board increases, and as the fraction of insiders on the board increases. In this paper, we argue that, contrary to conventional wisdom, some firms may benefit from having larger boards and greater fraction of insiders on the board. Outside directors serve both to monitor top management and to advise the CEO on business strategy. The monitoring role of the board has been studied extensively and the general consensus is that smaller boards are more effective at monitoring. The argument is that smaller groups are more cohesive, more productive, and can monitor the firm more effectively whereas large groups are fraught with problems such as social loafing and higher co-ordination costs. The advisory role of the board, however, has received far less attention. Since one function of board members is to provide advice and counsel to the CEO, we hypothesize that firms that require more advice (more complex firms) will need larger boards. In particular, we hypothesize that larger firms, diversified firms, and firms that rely more on debt financing, will derive greater firm value from having larger boards. Similarly, certain kinds of firms might benefit from higher insider representation on the board. Inside directors possess more firm-specific knowledge. Thus we conjecture that firms for which the firm-specific knowledge of insiders is relatively important, such as R&D-intensive firms, may derive greater value from having higher fraction of insiders on the board. Our findings are consistent with our hypotheses. For firms that have greater advising requirements, such as those that are large, diversified across industries, and rely more on debt financing, we find that Tobin's Q increases in board size. Furthermore, in firms for which the firm-specific knowledge of insiders is relatively important, such as R&D-intensive firms, Tobin's Q increases with the fraction of insiders on the board. Firms with high advising requirements have larger boards. Also, firms with high R&D have larger fraction of insiders on the board. These results challenge the notion that exchange listing requirements, mandates from institutional investors, and restrictions in the law, specifically those that limit board size and management representation on the board, necessarily enhance firm value.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated empirically the relation between the value creation efficiency and firms' market valuation and financial performance, and found that firms' intellectual capital has a positive impact on market value and financial performances, and may be an indicator for future financial performance.
Abstract: Purpose – The purpose of this article is to investigate empirically the relation between the value creation efficiency and firms’ market valuation and financial performance.Design/methodology/approach – Using data drawn from Taiwanese listed companies and Pulic's Value Added Intellectual Coefficient (VAIC™) as the efficiency measure of capital employed and intellectual capital, the authors construct regression models to examine the relationship between corporate value creation efficiency and firms’ market‐to‐book value ratios, and explore the relation between intellectual capital and firms’ current as well as future financial performance.Findings – The results support the hypothesis that firms’ intellectual capital has a positive impact on market value and financial performance, and may be an indicator for future financial performance. In addition, the authors found investors may place different value on the three components of value creation efficiency (physical capital, human capital, and structural cap...

Journal ArticleDOI
31 Mar 2005-Nature
TL;DR: This work directly measures electronic couplings in a molecular complex, the Fenna–Matthews–Olson photosynthetic light-harvesting protein, and finds distinct energy transport pathways that depend sensitively on the detailed spatial properties of the delocalized excited-state wavefunctions of the whole pigment–protein complex.
Abstract: Time-resolved optical spectroscopy is widely used to study vibrational and electronic dynamics by monitoring transient changes in excited state populations on a femtosecond timescale. Yet the fundamental cause of electronic and vibrational dynamics—the coupling between the different energy levels involved—is usually inferred only indirectly. Two-dimensional femtosecond infrared spectroscopy based on the heterodyne detection of three-pulse photon echoes has recently allowed the direct mapping of vibrational couplings, yielding transient structural information. Here we extend the approach to the visible range and directly measure electronic couplings in a molecular complex, the Fenna–Matthews–Olson photosynthetic light-harvesting protein. As in all photosynthetic systems, the conversion of light into chemical energy is driven by electronic couplings that ensure the efficient transport of energy from light-capturing antenna pigments to the reaction centre. We monitor this process as a function of time and frequency and show that excitation energy does not simply cascade stepwise down the energy ladder. We find instead distinct energy transport pathways that depend sensitively on the detailed spatial properties of the delocalized excited-state wavefunctions of the whole pigment–protein complex.

Journal ArticleDOI
TL;DR: In this paper, the authors examine whether supplier involvement in new product development can produce significant improvements in financial returns and/or product design performance; they test these proposed relationships using survey data collected from a group of global organizations and find support for them in a multiple regression analysis.

Posted Content
TL;DR: In this article, the authors describe a method for data assimilation in large, spatio-temporally chaotic systems, in which the state estimate and its approximate uncertainty are represented at any given time by an ensemble of system states.
Abstract: Data assimilation is an iterative approach to the problem of estimating the state of a dynamical system using both current and past observations of the system together with a model for the system's time evolution. Rather than solving the problem from scratch each time new observations become available, one uses the model to "forecast" the current state, using a prior state estimate (which incorporates information from past data) as the initial condition, then uses current data to correct the prior forecast to a current state estimate. This Bayesian approach is most effective when the uncertainty in both the observations and in the state estimate, as it evolves over time, are accurately quantified. In this article, we describe a practical method for data assimilation in large, spatiotemporally chaotic systems. The method is a type of "ensemble Kalman filter", in which the state estimate and its approximate uncertainty are represented at any given time by an ensemble of system states. We discuss both the mathematical basis of this approach and its implementation; our primary emphasis is on ease of use and computational speed rather than improving accuracy over previously published approaches to ensemble Kalman filtering. We include some numerical results demonstrating the efficiency and accuracy of our implementation for assimilating real atmospheric data with the global forecast model used by the U.S. National Weather Service.
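
The analysis step of a generic stochastic (perturbed-observation) ensemble Kalman filter can be sketched in a few lines. Note this is a plain EnKF for illustration, not the specific local ensemble transform filter the paper develops:

```python
# Generic stochastic EnKF analysis step (illustrative sketch).
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng):
    """ensemble: (n_state, n_members); y_obs: (n_obs,);
    H: (n_obs, n_state) observation operator; R: obs-error covariance."""
    n_state, m = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = (ensemble - x_mean) / np.sqrt(m - 1)      # normalized perturbations
    P = X @ X.T                                    # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturb observations so the analysis ensemble has consistent spread.
    y_pert = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=m).T
    return ensemble + K @ (y_pert - H @ ensemble)

# Toy usage: 3-variable state, 2 observed components, 20 members.
rng = np.random.default_rng(0)
ens = rng.normal(size=(3, 20))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
R = 0.1 * np.eye(2)
print(enkf_analysis(ens, np.array([0.5, -0.2]), H, R, rng).shape)  # (3, 20)
```

Methods like the paper's gain efficiency by localizing this update, solving many small analyses in parallel rather than one global one.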

Journal ArticleDOI
TL;DR: In this paper, a parsimonious model is developed relating a firm's price per share to (i) next-year expected earnings per share (or 12-months-forward eps), (ii) short-term growth (FY-2 versus FY-1) in eps, (iii) long-term (asymptotic) growth in eps, and (iv) cost-of-equity capital.
Abstract: This paper develops a parsimonious model relating a firm's price per share to (i) next-year expected earnings per share (or 12-months-forward eps), (ii) short-term growth (FY-2 versus FY-1) in eps, (iii) long-term (asymptotic) growth in eps, and (iv) cost-of-equity capital. The model assumes that the present value of dividends per share (dps) determines price, but it does not restrict how the dps-sequence is expected to evolve. All of these aspects of the model contrast sharply with the standard (Gordon/Williams) textbook approach, which equates the growth rates of expected eps and dps and fixes the growth rate and the payout rate. Though the constant growth model arises as a peculiar special case, the analysis in this paper rests on more general principles, including dividend policy irrelevancy. A second key result inverts the valuation formula to show how one expresses cost-of-capital as a function of the forward eps-to-price ratio and the two measures of growth in expected eps. This expression generalizes the textbook equation in which cost-of-capital equals the dps-yield plus the growth in expected eps.
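
The inverted cost-of-capital expression, as it is commonly stated when this model is applied in the implied cost-of-capital literature (e_1 is forward eps, d_1 forward dps, P_0 current price, g_2 short-term eps growth, and gamma − 1 the asymptotic growth rate; check the paper for its exact notation):

```latex
% Implied cost of equity capital from the inverted valuation formula:
r \;=\; A + \sqrt{A^{2} + \frac{e_{1}}{P_{0}}\bigl(g_{2} - (\gamma - 1)\bigr)},
\qquad
A \;=\; \tfrac{1}{2}\Bigl((\gamma - 1) + \frac{d_{1}}{P_{0}}\Bigr)
```

Setting g_2 = gamma − 1 collapses the square root to A and recovers the textbook dps-yield-plus-growth special case, r = d_1/P_0 + (gamma − 1).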

Journal ArticleDOI
TL;DR: Overall, MDD and ND individuals exhibited similar baseline and stress cortisol levels, but MDD patients had much higher cortisol levels during the recovery period than their ND counterparts, and the blunted reactivity-impaired recovery pattern observed among the afternoon studies was most pronounced in studies with older and more severely depressed patients.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the issue of knowledge sharing, one of the key mechanisms by which knowledge transfer can take place within organizations, and identify the relevant people management practices.
Abstract: This paper focuses on the issue of knowledge sharing, one of the key mechanisms by which knowledge transfer can take place within organizations. The aim of the paper is to identify the people manag...

Journal ArticleDOI
TL;DR: The authors suggest that the traditional conception of prejudice--as a general attitude or evaluation--can problematically obscure the rich texturing of emotions that people feel toward different groups.
Abstract: The authors suggest that the traditional conception of prejudice--as a general attitude or evaluation--can problematically obscure the rich texturing of emotions that people feel toward different groups. Derived from a sociofunctional approach, the authors predicted that groups believed to pose qualitatively distinct threats to in-group resources or processes would evoke qualitatively distinct and functionally relevant emotional reactions. Participants' reactions to a range of social groups provided a data set unique in the scope of emotional reactions and threat beliefs explored. As predicted, different groups elicited different profiles of emotion and threat reactions, and this diversity was often masked by general measures of prejudice and threat. Moreover, threat and emotion profiles were associated with one another in the manner predicted: Specific classes of threat were linked to specific, functionally relevant emotions, and groups similar in the threat profiles they elicited were also similar in the emotion profiles they elicited.

Journal ArticleDOI
TL;DR: In this paper, measurement invariance in a second-order factor model was tested across two groups at a set of hierarchically structured levels, using a quality-of-life dataset (n = 924).
Abstract: We illustrate testing measurement invariance in a second-order factor model using a quality of life dataset (n = 924). Measurement invariance was tested across 2 groups at a set of hierarchically structured levels: (a) configural invariance, (b) first-order factor loadings, (c) second-order factor loadings, (d) intercepts of measured variables, (e) intercepts of first-order factors, (f) disturbances of first-order factors, and (g) residual variances of observed variables. Given that measurement invariance at the factor loading and intercept levels was achieved, the latent factor mean difference on the higher order factor between the groups was also estimated. The analyses were performed on the mean and covariance structures within the framework of the confirmatory factor analysis using the LISREL 8.51 program. Implications of second-order factor models and measurement invariance in psychological research were discussed.

Journal ArticleDOI
TL;DR: In this article, a complete sample of seven luminous early-type galaxies in the Hubble Ultra Deep Field (UDF) with spectroscopic redshifts between 1.39 and 2.47 was reported.
Abstract: We report on a complete sample of seven luminous early-type galaxies in the Hubble Ultra Deep Field (UDF) with spectroscopic redshifts between 1.39 and 2.47, and to K_AB = 23. Low-resolution spectra of these objects have been extracted from the Hubble Space Telescope (HST) ACS grism data taken over the UDF by the Grism ACS Program for Extragalactic Science (GRAPES) project. Redshifts for the seven galaxies have been identified based on the UV feature at rest frame 2640 Å < λ < 2850 Å. This feature is mainly due to a combination of Fe II, Mg I, and Mg II absorptions, which are characteristic of stellar populations dominated by stars older than ~0.5 Gyr. The redshift identification and the passively evolving nature of these galaxies is further supported by the photometric redshifts and by the overall spectral energy distribution (SED), with the ultradeep HST ACS and NICMOS imaging revealing compact morphologies typical of...

Journal ArticleDOI
TL;DR: It is somewhat surprising that the upper bound can meet the lower bound under certain regularity conditions (not necessarily degradedness), and therefore the capacity can be characterized exactly; previously this has been proven only for the degraded Gaussian relay channel.
Abstract: We study the capacity of multiple-input multiple- output (MIMO) relay channels. We first consider the Gaussian MIMO relay channel with fixed channel conditions, and derive upper bounds and lower bounds that can be obtained numerically by convex programming. We present algorithms to compute the bounds. Next, we generalize the study to the Rayleigh fading case. We find an upper bound and a lower bound on the ergodic capacity. It is somewhat surprising that the upper bound can meet the lower bound under certain regularity conditions (not necessarily degradedness), and therefore the capacity can be characterized exactly; previously this has been proven only for the degraded Gaussian relay channel. We investigate sufficient conditions for achieving the ergodic capacity; and in particular, for the case where all nodes have the same number of antennas, the capacity can be achieved under certain signal-to-noise ratio (SNR) conditions. Numerical results are also provided to illustrate the bounds on the ergodic capacity of the MIMO relay channel over Rayleigh fading. Finally, we present a potential application of the MIMO relay channel for cooperative communications in ad hoc networks.
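
The upper and lower bounds in question are the classical cut-set bound and the decode-and-forward bound for the relay channel (Cover and El Gamal), which the paper evaluates for MIMO inputs via convex programming. Writing X, X_1 for the source and relay inputs and Y, Y_1 for the destination and relay outputs:

```latex
% Cut-set upper bound:
C \;\le\; \max_{p(x,x_1)} \min\bigl\{\, I(X,X_1;Y),\; I(X;Y,Y_1 \mid X_1) \,\bigr\}
% Decode-and-forward lower bound:
C \;\ge\; \max_{p(x,x_1)} \min\bigl\{\, I(X,X_1;Y),\; I(X;Y_1 \mid X_1) \,\bigr\}
```

The regularity conditions mentioned in the abstract are those under which the two expressions coincide, so the capacity is known exactly.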

Journal ArticleDOI
TL;DR: Through an order-of-magnitude analysis of various possible mechanisms, it is shown that convection caused by the Brownian movement of these nanoparticles is primarily responsible for the enhancement in k of these colloidal nanofluids.
Abstract: Researchers have been perplexed for the past five years with the unusually high thermal conductivity (k) of nanoparticle-laden colloidal solutions (nanofluids). Although various mechanisms and models have been proposed in the literature to explain the high k of these nanofluids, no concrete conclusions have been reached. Through an order-of-magnitude analysis of various possible mechanisms, we show that convection caused by the Brownian movement of these nanoparticles is primarily responsible for the enhancement in k of these colloidal nanofluids.
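
The flavor of such order-of-magnitude reasoning is easy to reproduce with textbook physics. The snippet below computes a Stokes-Einstein Brownian diffusivity for a nominal 10 nm particle in water and compares mass- and heat-diffusion timescales over one particle diameter; all parameter values are generic assumptions, and this is not the paper's actual analysis:

```python
# Order-of-magnitude estimate: Brownian diffusivity of a nanoparticle
# (Stokes-Einstein) vs. thermal diffusion in the base fluid.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K (assumed)
mu = 8.9e-4          # dynamic viscosity of water, Pa*s (assumed)
d = 10e-9            # particle diameter, m (assumed)

D = k_B * T / (3 * math.pi * mu * d)   # Stokes-Einstein diffusivity, m^2/s
print(f"Brownian diffusivity D ~ {D:.1e} m^2/s")

alpha = 1.4e-7                          # thermal diffusivity of water, m^2/s
t_mass = d**2 / (6 * D)                 # time to diffuse one diameter
t_heat = d**2 / (6 * alpha)
print(f"particle diffusion time ~ {t_mass:.1e} s, heat ~ {t_heat:.1e} s")
```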

Journal ArticleDOI
TL;DR: A sleep duration of 6 hours or less or 9 hours or more is associated with increased prevalence of DM and IGT, and voluntary sleep restriction may contribute to the large public health burden of DM.
Abstract: Results: The median sleep time was 7 hours per night, with 27.1% of subjects sleeping 6 hours or less per night. Compared with those sleeping 7 to 8 hours per night, subjects sleeping 5 hours or less and 6 hours per night had adjusted odds ratios for DM of 2.51 (95% confidence interval, 1.57-4.02) and 1.66 (95% confidence interval, 1.15-2.39), respectively. Adjusted odds ratios for IGT were 1.33 (95% confidence interval, 0.83-2.15) and 1.58 (95% confidence interval, 1.15-2.18), respectively. Subjects sleeping 9 hours or more per night also had increased odds ratios for DM and IGT. These associations persisted when subjects with insomnia symptoms were excluded. Conclusions: A sleep duration of 6 hours or less or 9 hours or more is associated with increased prevalence of DM and IGT. Because this effect was present in subjects without insomnia, voluntary sleep restriction may contribute to the large public health burden of DM. Arch Intern Med. 2005;165:863-868

Journal ArticleDOI
TL;DR: Tertiary macrofossils of the flowering plant family Leguminosae were used as time constraints to estimate ages of the earliest branching clades identified in separate plastid matK and rbcL gene phylogenies; the results point to a rapid family-wide diversification and predict few if any legume fossils prior to the Cenozoic.
Abstract: Tertiary macrofossils of the flowering plant family Leguminosae (legumes) were used as time constraints to estimate ages of the earliest branching clades identified in separate plastid matK and rbcL gene phylogenies. Penalized likelihood rate smoothing was performed on sets of Bayesian likelihood trees generated with the AIC-selected GTR+Γ+I substitution model. Unequivocal legume fossils dating from the Recent continuously back to about 56 million years ago were used to fix the family stem clade at 60 million years (Ma), and at 1-Ma intervals back to 70 Ma. Specific fossils that showed distinctive combinations of apomorphic traits were used to constrain the minimum age of 12 specific internal nodes. These constraints were placed on stem rather than respective crown clades in order to bias for younger age estimates. Regardless, the mean age of the legume crown clade differs by only 1.0 to 2.5 Ma from the fixed age of the legume stem clade. Additionally, the oldest caesalpinioid, mimosoid, and papilionoid crown clades show approximately the same age range of 39 to 59 Ma. These findings all point to a rapid family-wide diversification, and predict few if any legume fossils prior to the Cenozoic. The range of the matK substitution rate, 2.1–24.6 × 10^-10 substitutions per site per year, is higher than that of rbcL, 1.6–8.6 × 10^-10, and is accompanied by more uniform rate variation among codon positions. The matK and rbcL substitution rates are highly correlated across the legume family. For example, both loci have the slowest substitution rates among the mimosoids and the fastest rates among the millettioid legumes. This explains why groups such as the millettioids are amenable to species-level phylogenetic analysis with these loci, whereas other legume groups are not.

Journal ArticleDOI
TL;DR: A general, weighted fail‐safe calculation, grounded in the meta‐analysis framework, applicable to both fixed‐ and random‐effects models, is proposed.
Abstract: Quantitative literature reviews such as meta-analysis are becoming common in evolutionary biology but may be strongly affected by publication biases. Using fail-safe numbers is a quick way to estimate whether publication bias is likely to be a problem for a specific study. However, previously suggested fail-safe calculations are unweighted and are not based on the framework in which most meta-analyses are performed. A general, weighted fail-safe calculation, grounded in the meta-analysis framework, applicable to both fixed- and random-effects models, is proposed. Recent meta-analyses published in Evolution are used for illustration.
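
A fixed-effect version of a weighted fail-safe calculation can be sketched as follows: ask how many hypothetical unpublished studies of zero effect, each carrying the mean observed weight, would pull the pooled effect below significance. This is the general idea only; the paper's exact formulation (including the random-effects case) should be taken from the original:

```python
# Weighted fail-safe number under a fixed-effect model (illustrative sketch).
import numpy as np
from scipy import stats

def weighted_failsafe_n(effects, variances, alpha=0.05):
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    s = float(np.sum(w * np.asarray(effects)))     # weighted effect sum
    z_crit = stats.norm.ppf(1 - alpha / 2)         # two-tailed critical value
    # Pooled z with n added null studies of mean weight w_bar is
    # s / sqrt(sum(w) + n*w_bar); solve for n at z = z_crit.
    n = ((s / z_crit) ** 2 - np.sum(w)) / np.mean(w)
    return max(0.0, np.floor(n))

effects = [0.45, 0.30, 0.60, 0.25, 0.50]     # illustrative effect sizes
variances = [0.02, 0.03, 0.05, 0.04, 0.03]   # illustrative sampling variances
print(weighted_failsafe_n(effects, variances))  # null studies needed
```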