
Showing papers by "University of Texas at Arlington published in 2006"


Journal ArticleDOI
TL;DR: A new taxonomy of living amphibians is proposed to correct the deficiencies of the old one, based on the largest phylogenetic analysis of living Amphibia so far accomplished, and many subsidiary taxa are demonstrated to be nonmonophyletic.
Abstract: The evidentiary basis of the currently accepted classification of living amphibians is discussed and shown not to warrant the degree of authority conferred on it by use and tradition. A new taxonomy of living amphibians is proposed to correct the deficiencies of the old one. This new taxonomy is based on the largest phylogenetic analysis of living Amphibia so far accomplished. We combined the comparative anatomical character evidence of Haas (2003) with DNA sequences from the mitochondrial transcription unit H1 (12S and 16S ribosomal RNA and tRNA-Valine genes, ≈ 2,400 bp of mitochondrial sequences) and the nuclear genes histone H3, rhodopsin, tyrosinase, and seven in absentia, and the large ribosomal subunit 28S (≈ 2,300 bp of nuclear sequences; ca. 1.8 million base pairs; x̄ = 3.7 kb/terminal). The dataset includes 532 terminals sampled from 522 species representative of the global diversity of amphibians as well as seven of the closest living relatives of amphibians for outgroup comparisons. The...

1,994 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed and tested a conceptual framework, which predicts that customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin's q and stock return), and corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and these moderated relationships are mediated by customer satisfaction.
Abstract: Although prior research has addressed the influence of corporate social responsibility (CSR) on perceived customer responses, it is not clear whether CSR affects market value of the firm. This study develops and tests a conceptual framework, which predicts that (1) customer satisfaction partially mediates the relationship between CSR and firm market value (i.e., Tobin's q and stock return), (2) corporate abilities (innovativeness capability and product quality) moderate the financial returns to CSR, and (3) these moderated relationships are mediated by customer satisfaction. Based on a large-scale secondary data set, the results show support for this framework. Notably, the authors find that in firms with low innovativeness capability, CSR actually reduces customer satisfaction levels and, through the lowered satisfaction, harms market value. The uncovered mediated and asymmetrically moderated results offer important implications for marketing theory and practice.
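
The mediation logic in the framework above (CSR affecting firm value partly through customer satisfaction, with a remaining direct path) can be illustrated with a standard product-of-coefficients check. The following is a synthetic sketch with hypothetical variable names, not the paper's data or model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration of partial mediation: csr -> satisfaction -> firm value (q).
# Variable names and effect sizes are hypothetical, not the paper's.
rng = np.random.default_rng(0)
n = 300
csr = rng.normal(size=n)
sat = 0.5 * csr + rng.normal(size=n)
q = 0.3 * csr + 0.4 * sat + rng.normal(size=n)
df = pd.DataFrame({"csr": csr, "sat": sat, "q": q})

total  = smf.ols("q ~ csr", df).fit()        # total effect of CSR on value
direct = smf.ols("q ~ csr + sat", df).fit()  # direct effect, controlling the mediator
a      = smf.ols("sat ~ csr", df).fit()      # CSR -> satisfaction path
indirect = a.params["csr"] * direct.params["sat"]
print(f"total={total.params['csr']:.2f}  direct={direct.params['csr']:.2f}  "
      f"indirect={indirect:.2f}")  # direct < total with indirect > 0: partial mediation
```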

1,921 citations


Journal ArticleDOI
04 Jan 2006 · JAMA
TL;DR: Because of better survival after asystole and PEA, children had better outcomes than adults despite fewer cardiac arrests due to VF or pulseless VT, according to this multicenter registry of in-hospital cardiac arrest.
Abstract: Context: Cardiac arrests in adults are often due to ventricular fibrillation (VF) or pulseless ventricular tachycardia (VT), which are associated with better outcomes than asystole or pulseless electrical activity (PEA). Cardiac arrests in children are typically asystole or PEA. Objective: To test the hypothesis that children have relatively fewer in-hospital cardiac arrests associated with VF or pulseless VT compared with adults and, therefore, worse survival outcomes. Design, Setting, and Patients: A prospective observational study from a multicenter registry (National Registry of Cardiopulmonary Resuscitation) of cardiac arrests in 253 US and Canadian hospitals between January 1, 2000, and March 30, 2004. A total of 36 902 adults (≥18 years) and 880 children (<18 years) with pulseless cardiac arrests requiring chest compressions, defibrillation, or both were assessed. Cardiac arrests occurring in the delivery department, neonatal intensive care unit, and in the out-of-hospital setting were excluded. Main Outcome Measure: Survival to hospital discharge. Results: The rate of survival to hospital discharge following pulseless cardiac arrest was higher in children than adults (27% [236/880] vs 18% [6485/36 902]; adjusted odds ratio [OR], 2.29; 95% confidence interval [CI], 1.95-2.68). Of these survivors, 65% (154/236) of children and 73% (4737/6485) of adults had good neurological outcome. The prevalence of VF or pulseless VT as the first documented pulseless rhythm was 14% (120/880) in children and 23% (8361/36 902) in adults (OR, 0.54; 95% CI, 0.44-0.65; P<.001). The prevalence of asystole was 40% (350) in children and 35% (13 024) in adults (OR, 1.20; 95% CI, 1.10-1.40; P = .006), whereas the prevalence of PEA was 24% (213) in children and 32% (11 963) in adults (OR, 0.67; 95% CI, 0.57-0.78; P<.001). After adjustment for differences in preexisting conditions, interventions in place at time of arrest, witnessed and/or monitored status, time to defibrillation of VF or pulseless VT, intensive care unit location of arrest, and duration of cardiopulmonary resuscitation, only first documented pulseless arrest rhythm remained significantly associated with differential survival to discharge (24% [135/563] in children vs 11% [2719/24 987] in adults with asystole and PEA; adjusted OR, 2.73; 95% CI, 2.23-3.32). Conclusions: In this multicenter registry of in-hospital cardiac arrest, the first documented pulseless arrest rhythm was typically asystole or PEA in both children and adults. Because of better survival after asystole and PEA, children had better outcomes than adults despite fewer cardiac arrests due to VF or pulseless VT.
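
As a quick sanity check, the unadjusted odds ratio for survival can be recomputed directly from the counts reported above; it comes out lower (≈1.72) than the adjusted OR of 2.29, reflecting the covariate adjustment the abstract describes. A minimal sketch:

```python
# Unadjusted odds ratio for survival to discharge, children vs. adults,
# from the counts in the abstract (236/880 vs. 6485/36902). Note: the
# paper's OR of 2.29 is adjusted for covariates, so it differs.

def odds(events: int, total: int) -> float:
    """Odds = events / non-events."""
    return events / (total - events)

or_unadjusted = odds(236, 880) / odds(6485, 36902)
print(f"Unadjusted OR: {or_unadjusted:.2f}")  # ~1.72
```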

1,043 citations


Book
01 Jan 2006
TL;DR: This book is about objective image quality assessment to provide computational models that can automatically predict perceptual image quality and to provide new directions for future research by introducing recent models and paradigms that significantly differ from those used in the past.
Abstract: This book is about objective image quality assessment, where the aim is to provide computational models that can automatically predict perceptual image quality. The early years of the 21st century have witnessed a tremendous growth in the use of digital images as a means for representing and communicating information. A considerable percentage of the image processing literature is devoted to methods for improving the appearance of images, or for maintaining the appearance of images that are processed. Nevertheless, the quality of digital images, processed or otherwise, is rarely perfect. Images are subject to distortions during acquisition, compression, transmission, processing, and reproduction. To maintain, control, and enhance the quality of images, it is important for image acquisition, management, communication, and processing systems to be able to identify and quantify image quality degradations. The goals of this book are as follows: (a) to introduce the fundamentals of image quality assessment and to explain the relevant engineering problems; (b) to give a broad treatment of the current state of the art in image quality assessment, by describing leading algorithms that address these engineering problems; and (c) to provide new directions for future research, by introducing recent models and paradigms that significantly differ from those used in the past. The book is written to be accessible to university students curious about the state of the art of image quality assessment, expert industrial R&D engineers seeking to implement image/video quality assessment systems for specific applications, and academic theorists interested in developing new algorithms for image quality assessment or using existing algorithms to design or optimize other image processing applications.
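
For context, the simplest full-reference objective metric is PSNR, the baseline that perceptual models surveyed in books like this aim to improve on. A minimal, self-contained sketch (illustrative, not from the book):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a distorted image."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a random "reference" image and a noisy copy.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
dist = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
print(f"PSNR: {psnr(ref, dist):.1f} dB")
```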

1,041 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a framework of an organization's supply chain process flexibilities as an important antecedent of its supply chain agility, and then establish the key factors that determine the flexibility attributes of the three critical processes of the supply chain.

853 citations


Journal ArticleDOI
TL;DR: A new, monophyletic taxonomy for dendrobatids is proposed, recognizing the inclusive clade as a superfamily (Dendrobatoidea) composed of two families (one of which is new), six subfamilies (three new), and 16 genera (four new).
Abstract: The known diversity of dart-poison frog species has grown from 70 in the 1960s to 247 at present, with no sign that the discovery of new species will wane in the foreseeable future. Although this growth in knowledge of the diversity of this group has been accompanied by detailed investigations of many aspects of the biology of dendrobatids, their phylogenetic relationships remain poorly understood. This study was designed to test hypotheses of dendrobatid diversification by combining new and prior genotypic and phenotypic evidence in a total evidence analysis. DNA sequences were sampled for five mitochondrial and six nuclear loci (approximately 6,100 base pairs [bp]; x̄ = 3,740 bp per terminal; total dataset composed of approximately 1.55 million bp), and 174 phenotypic characters were scored from adult and larval morphology, alkaloid profiles, and behavior. These data were combined with relevant published DNA sequences. Ingroup sampling targeted several previously unsampled species, including Ar...

577 citations


Journal ArticleDOI
TL;DR: It is demonstrated that CPPs offer the most efficacious and cost-effective, evidence-based treatment for persons with chronic pain, relative to a host of widely used conventional medical treatments.

466 citations


Journal ArticleDOI
V. M. Abazov, Brad Abbott, M. Abolins, Bobby Samir Acharya, +814 more (74 institutions)
TL;DR: The D0 experiment enjoyed a very successful data-collection run at the Fermilab Tevatron collider between 1992 and 1996 as discussed by the authors, and the detector has since been upgraded to take advantage of improvements to the Tevatron and to enhance its physics capabilities.
Abstract: The D0 experiment enjoyed a very successful data-collection run at the Fermilab Tevatron collider between 1992 and 1996. Since then, the detector has been upgraded to take advantage of improvements to the Tevatron and to enhance its physics capabilities. We describe the new elements of the detector, including the silicon microstrip tracker, central fiber tracker, solenoidal magnet, preshower detectors, forward muon detector, and forward proton detector. The uranium/liquid-argon calorimeters and central muon detector, remaining from Run I, are discussed briefly. We also present the associated electronics, triggering, and data acquisition systems, along with the design and implementation of software specific to D0.

425 citations


Journal ArticleDOI
TL;DR: In nonpsychotic MDD outpatients without overt cognitive impairment, clinician assessment of depression severity using either the QIDS-C16 or HRSD17 may be successfully replaced by either the self-report or IVR version of the QIDS, demonstrating interchangeability among the three methods.

354 citations


Journal Article
TL;DR: The results of this study show that plyometric training can be an effective training technique to improve an athlete's agility, and that ground reaction times are decreased with plyometric training.
Abstract: The purpose of the study was to determine if six weeks of plyometric training can improve an athlete's agility. Subjects were divided into two groups, a plyometric training group and a control group. The plyometric training group performed a six-week plyometric training program and the control group did not perform any plyometric training techniques. All subjects participated in two agility tests (T-test and Illinois Agility Test) and a force plate test for ground reaction times, both pre- and post-testing. Univariate ANCOVAs were conducted to analyze the change scores (post - pre) in the independent variables by group (training or control) with pre scores as covariates. The univariate ANCOVA revealed a significant group effect, F(2,26) = 25.42, p < .001, for the T-test agility measure. For the Illinois Agility Test, a significant group effect, F(2,26) = 27.24, p < .001, was also found. The plyometric training group had quicker posttest times compared to the control group for the agility tests. A significant group effect, F(2,26) = 7.81, p = .002, was found for the force plate test. The plyometric training group reduced time on the ground on the posttest compared to the control group. The results of this study show that plyometric training can be an effective training technique to improve an athlete's agility. Key points: plyometric training can enhance the agility of athletes; six weeks of plyometric training is sufficient to see agility results; ground reaction times are decreased with plyometric training.
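
The analysis described above (ANCOVA on change scores with pre-test scores as the covariate) is straightforward to reproduce in outline. A minimal sketch with hypothetical data and column names (not the study's measurements), using statsmodels:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical T-test agility times in seconds; 'group' marks training condition.
# Real data would come from the study's pre/post measurements.
df = pd.DataFrame({
    "pre":   [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0],
    "post":  [ 9.6,  9.8, 9.3, 10.0, 10.3, 9.8, 10.2, 9.9],
    "group": ["plyo"] * 4 + ["control"] * 4,
})
df["change"] = df["post"] - df["pre"]

# ANCOVA on change scores with the pre score as covariate, as in the study.
model = smf.ols("change ~ C(group) + pre", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F test for the group effect
```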

353 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the effects of ethical climate on salespeople's role stress, job attitudes, turnover intention, and job performance and found that ethical climate results in lower role conflict and role ambiguity and higher satisfaction, which in turn leads to lower turnover intention and higher organizational commitment.
Abstract: This study builds on previous research to investigate the effects of ethical climate on salesperson’s role stress, job attitudes, turnover intention, and job performance. Responses from 138 salespeople who work for a large retailer selling high-end consumer durables at 68 stores in 16 states were used to examine the process through which ethical climate affects organizational variables. This is the first study offering empirical evidence that both job stress and job attitudes are the mechanisms through which a high ethical climate leads to lower turnover intention and higher job performance. Results indicate that ethical climate results in lower role conflict and role ambiguity and higher satisfaction, which, in turn, leads to lower turnover intention and higher organizational commitment. Also, findings indicate that organizational commitment is a significant predictor of job performance.

Journal ArticleDOI
TL;DR: In this article, the authors compared competing theories of consumer empowerment and details findings that examine the applicability of the theory to “ethical consumer” narratives, concluding that participants embraced a voting metaphor, either explicitly or implicitly, to view consumption as an ethical/political domain.
Abstract: Purpose – Increasing numbers of consumers are expressing concerns about reports of questionable corporate practices and are responding through boycotts and buycotts. This paper compares competing theories of consumer empowerment and details findings that examine the applicability of the theory to “ethical consumer” narratives. The nature and impact of consumer empowerment in consumer decision making is then discussed.Design/methodology/approach – The study takes an exploratory approach by conducting semi‐structured in‐depth interviews with a purposive sample of ten consumers. These were recruited from an “ethical product” fair in Scotland.Findings – Results indicate that the participating consumers embraced a voting metaphor, either explicitly or implicitly, to view consumption as an ethical/political domain. Setting their choices within perceived collective consumer behaviour, they characterised their consumption as empowering. This results in an ethical consumer project that can be seen as operating wit...

Journal ArticleDOI
TL;DR: This paper provides a review of statistical methods that are useful in conducting computer experiments and describes approaches for the two primary tasks of metamodeling: selecting an experimental design and fitting a statistical model.
Abstract: In this paper, we provide a review of statistical methods that are useful in conducting computer experiments. Our focus is on the task of metamodeling, which is driven by the goal of optimizing a complex system via a deterministic simulation model. However, we also mention the case of a stochastic simulation, and examples of both cases are discussed. Our review first presents several engineering applications and then describes approaches for the two primary tasks of metamodeling: (i) selecting an experimental design; and (ii) fitting a statistical model. Seven statistical modeling methods are included. Both classical and newer experimental designs are discussed. Finally, our own computational study tests the various metamodeling options on two two-dimensional response surfaces and one ten-dimensional surface.
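
The two metamodeling tasks named above (choosing a design, then fitting a statistical model) can be illustrated end to end. A minimal sketch, assuming a toy 2-D deterministic simulator and a Gaussian process (kriging) fit via scikit-learn; the design, kernel, and test function are illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A cheap stand-in for the expensive deterministic simulation model.
def simulate(x: np.ndarray) -> np.ndarray:
    return np.sin(3 * x[:, 0]) + np.cos(2 * x[:, 1])

rng = np.random.default_rng(1)
X_design = rng.uniform(0, 1, size=(30, 2))  # simple random design; a real
y = simulate(X_design)                      # study might use a Latin hypercube

# Fit a Gaussian-process (kriging) metamodel and predict at new points.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_design, y)
X_new = rng.uniform(0, 1, size=(5, 2))
y_hat, y_std = gp.predict(X_new, return_std=True)
print(np.c_[y_hat, y_std])  # predictions with uncertainty estimates
```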

Journal ArticleDOI
TL;DR: In this paper, the size dependence of the chemical ordering parameter S and selected magnetic properties of the L10-FePt phase has been investigated, including the Curie temperature, Tc, magnetization, etc.
Abstract: FePt nanoparticles have great application potential in advanced magnetic materials such as ultrahigh-density recording media and high-performance permanent magnets. The key for applications is the very high uniaxial magnetocrystalline anisotropy of the L10-FePt phase, which is based on crystalline ordering of the face-centered tetragonal (fct) structure, described by the chemical-ordering parameter S. Higher chemical ordering results in higher magnetocrystalline anisotropy. Unfortunately, as-synthesized FePt nanoparticles take a disordered face-centered cubic (fcc) structure that has low magnetocrystalline anisotropy. Heat-treatment is necessary to convert the fcc structure to the ordered fct structure.

Several previous theoretical and experimental investigations have been reported on the size-dependent chemical ordering of FePt nanoparticles. It has been observed that the degree of ordering decreases with decreasing particle size of the sputtered FePt nanoparticles. Theoretical simulation predicted that the ordering would not take place when the particle size is below a critical value. However, there have not been systematic experimental studies on the quantitative size dependence of chemical ordering of FePt nanoparticles due to the lack of monodisperse L10-FePt nanoparticles with controllable sizes. There are also few studies reported to date on the quantitative particle-size dependence of magnetic properties, including the Curie temperature, coercivity, and magnetization of the L10-FePt phase, although it has been well accepted that there is a size effect on the ferromagnetism of any low-dimensional magnets. Additionally, the magnetic properties of FePt ferromagnets, as observed in thin-film samples, are affected by the degree of chemical ordering, which is in turn size dependent. It is therefore highly desirable to understand the size and chemical-ordering effects, and their influence on the magnetic properties of the nanoparticles.

A major hurdle in obtaining the particle-size dependence of structural and magnetic properties of the L10 phase is particle sintering during the heat-treatments that convert the fcc phase to the fct phase. This long-standing problem has been solved recently by adopting the salt-matrix annealing technique. With this technique, particle aggregation during the phase transformation has been avoided so that the true size-dependent properties of the fct phase can be measured.

In this paper, we report results on the quantitative particle-size dependence of the chemical-ordering parameter S and selected magnetic properties, including the Curie temperature, Tc, magnetization, Ms, and coercivity, Hc, with the particle size varying from 2 to 15 nm. Figure 1 shows the transmission electron microscopy (TEM) images of the FePt nanoparticles with different sizes before and after annealing in a salt matrix at 973 K for 4 h. The images, from left to right, show nanoparticles with nominal diameters of 2, 4, 6, 8, and 15 nm, respectively. The upper and lower rows are images of as-synthesized and salt-matrix-annealed nanoparticles, respectively. As shown in Figure 1, the particle size is retained well upon annealing. Both the as-synthesized and annealed nanoparticles are monodisperse with a standard deviation of 5–10 % in diameter. TEM observations also revealed that when the particle size is smaller than or equal to 8 nm, the fct nanoparticles are monocrystalline, whereas the 15 nm fct particles are polycrystalline.
It is interesting to see that the L10 nanoparticles, tiny ferromagnets at room temperature, are dispersed very well without agglomeration despite the dipolar interaction between the particles, if a solvent with high viscosity is chosen and if the solution is diluted. Extensive TEM and X-ray diffraction (XRD) analyses have proved that the technique of salt-matrix annealing can be applied to heat-treatments of the FePt nanoparticles without leading to particle agglomeration and sintering, if a suitable salt-to-particle ratio and proper annealing conditions are chosen. Figure 2 shows the XRD patterns of the 4 nm, as-synthesized, fcc-structured nanoparticles and the particles annealed in a salt matrix at 873 K for 2 h, 973 K for 2 h, and 973 K for 4 h (from bottom to top), respectively. As shown in the figure, the positions of the (111) peaks shift in the higher-angle...
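
The chemical-ordering parameter S discussed above is commonly estimated from XRD by comparing a superlattice-to-fundamental peak intensity ratio against the ratio calculated for full order, via S² = (measured ratio) / (fully ordered ratio). A sketch with placeholder numbers, not values from this paper:

```python
import math

# Long-range chemical-ordering parameter S from XRD intensities:
# S^2 = (I_sup / I_fund)_measured / (I_sup / I_fund)_fully_ordered.
# All values below are placeholders; the fully ordered ratio comes from
# structure-factor calculations for L10 FePt, not reproduced here.
I_sup_meas, I_fund_meas = 12.0, 100.0  # e.g., (001) superlattice, (111) fundamental
ratio_fully_ordered = 0.16             # placeholder calculated ratio

S = math.sqrt((I_sup_meas / I_fund_meas) / ratio_fully_ordered)
print(f"S = {S:.2f}")  # ~0.87 for these placeholder numbers
```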

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the integrated effects of ethical climate and supervisory trust on salesperson's job attitudes and intentions to quit, and find that the effect of these two factors on quitting intentions is significant.
Abstract: This study builds on previous research to investigate the integrated effects of ethical climate and supervisory trust on salesperson’s job attitudes and intentions to quit. Responses from 344 sales...

Posted Content
TL;DR: In this paper, the authors examined the linkages between audit and non-audit fees and accrual quality and found that higher audit effort and quality translate to better accrual quality.
Abstract: This paper examines linkages between the audit and non-audit fees and accrual quality. We measure accrual quality by the Francis et al. (2005) modification of the Dechow and Dichev (2002) measure. We posit that in settings where audit quality is compromised by a loss of auditor independence, managers use accruals more opportunistically and thereby drive down accrual quality. Conversely, higher audit effort and quality translate to better accrual quality. Our dependent variables are the relative magnitude of non-audit fees to audit fees and the absolute magnitudes of audit, non-audit, and total fees. Results show that accrual quality has a significant negative association with the magnitude of non-audit fees but a significant positive association with audit fees. This latter result is consistent with the proposition that a higher audit fee reflects higher audit effort and better judgments about the propriety of accruals, but is not consistent with the proposition that audit fee is associated with economic bonding.
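
To make the research design concrete, the core association can be sketched as a regression of an accrual-quality measure on fee magnitudes. The following is a synthetic illustration with hypothetical variable names, not the paper's data, measure construction, or full set of controls:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic firm-year data. 'aq' stands in for an accrual-quality measure;
# the paper's actual measure (Francis et al. 2005 modification of
# Dechow-Dichev) and controls are not reproduced here.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "ln_audit":    rng.normal(13, 1, n),  # log audit fees
    "ln_nonaudit": rng.normal(12, 1, n),  # log non-audit fees
})
# Build in the directions the paper reports (illustration only):
# quality rising with audit fees, falling with non-audit fees, plus noise.
df["aq"] = 0.3 * df["ln_audit"] - 0.2 * df["ln_nonaudit"] + rng.normal(0, 1, n)

model = smf.ols("aq ~ ln_audit + ln_nonaudit", data=df).fit()
print(model.params)  # coefficient signs mirror the reported associations
```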

Journal ArticleDOI
TL;DR: In this paper, the authors study several qualitative properties of the Degasperis-Procesi equation and prove the existence and uniqueness of global weak solutions to the equation provided the initial data satisfies appropriate conditions.
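For reference (the TL;DR above does not write it out), the Degasperis-Procesi equation is standardly given, for a velocity field u(t, x), as

u_t - u_{txx} + 4\,u\,u_x = 3\,u_x u_{xx} + u\,u_{xxx},

the b = 3 member of the b-family of peakon equations (b = 2 gives the Camassa-Holm equation).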

Journal ArticleDOI
TL;DR: In this paper, the authors examined the effect of participation in three types of development activities among salaried employees of a firm that significantly increased access to development after a series of layoffs in the late 1990s and found that on-the-job training was positively related to organisational commitment and negatively related to intention to turnover.
Abstract: Participation in three types of development activities is examined among salaried employees of a firm that significantly increased access to development after a series of layoffs in the late 1990s. Analyses of survey and archival data representing 667 employees show that on-the-job training was positively related to organisational commitment and negatively related to intention to turnover. Participation in tuition-reimbursement, which provides more general or marketable skills, was positively related to intention to turnover. However, intention to turnover was reduced after earning a degree through tuition-reimbursement if employees were subsequently promoted. Implications for an employment relationship based on ‘employability’ are discussed.

Journal ArticleDOI
TL;DR: The evolutionary history of SETMAR, a new primate chimeric gene resulting from fusion of a SET histone methyltransferase gene to the transposase gene of a mobile element, is reconstructed to provide insight into the conditions required for a successful gene fusion.
Abstract: The emergence of new genes and functions is of central importance to the evolution of species. The contribution of various types of duplications to genetic innovation has been extensively investigated. Less understood is the creation of new genes by recycling of coding material from selfish mobile genetic elements. To investigate this process, we reconstructed the evolutionary history of SETMAR, a new primate chimeric gene resulting from fusion of a SET histone methyltransferase gene to the transposase gene of a mobile element. We show that the transposase gene was recruited as part of SETMAR 40–58 million years ago, after the insertion of an Hsmar1 transposon downstream of a preexisting SET gene, followed by the de novo exonization of previously noncoding sequence and the creation of a new intron. The original structure of the fusion gene is conserved in all anthropoid lineages, but only the N-terminal half of the transposase is evolving under strong purifying selection. In vitro assays show that this region contains a DNA-binding domain that has preserved its ancestral binding specificity for a 19-bp motif located within the terminal-inverted repeats of Hsmar1 transposons and their derivatives. The presence of these transposons in the human genome constitutes a potential reservoir of ≈1,500 perfect or nearly perfect SETMAR-binding sites. Our results not only provide insight into the conditions required for a successful gene fusion, but they also suggest a mechanism by which the circuitry underlying complex regulatory networks may be rapidly established.
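
The estimate of ≈1,500 perfect or nearly perfect binding sites above comes from scanning the genome for matches to a 19-bp motif. A naive sketch of such a scan; the motif and sequence below are placeholders, since the actual Hsmar1 terminal-inverted-repeat motif is not reproduced here:

```python
# Count perfect or near-perfect (<= 1 mismatch) occurrences of a 19-bp
# motif in a DNA sequence -- the kind of scan behind the ~1,500-site
# estimate. Naive O(n*k) sliding window; fine for illustration.

def count_sites(genome: str, motif: str, max_mismatches: int = 1) -> int:
    k = len(motif)
    hits = 0
    for i in range(len(genome) - k + 1):
        window = genome[i:i + k]
        mismatches = sum(a != b for a, b in zip(window, motif))
        if mismatches <= max_mismatches:
            hits += 1
    return hits

MOTIF = "A" * 19            # placeholder 19-bp motif, not the real Hsmar1 motif
genome = "ACGT" * 10_000    # placeholder sequence; real input: genome FASTA
print(count_sites(genome, MOTIF))
```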

Journal ArticleDOI
TL;DR: This paper found that the VOT difference between lax and aspirated stops has decreased, in some cases to the point of complete overlap, and that the mean F0 for words beginning with lax stops is significantly lower than the mean F0 for comparable words beginning with tense or aspirated stops.
Abstract: Acoustic evidence suggests that contemporary Seoul Korean may be developing a tonal system, which is arising in the context of a nearly completed change in how speakers use voice onset time (VOT) to mark the language’s distinction among tense, lax and aspirated stops. Data from 36 native speakers of varying ages indicate that while VOT for tense stops has not changed since the 1960s, VOT differences between lax and aspirated stops have decreased, in some cases to the point of complete overlap. Concurrently, the mean F0 for words beginning with lax stops is significantly lower than the mean F0 for comparable words beginning with tense or aspirated stops. Hence the underlying contrast between lax and aspirated stops is maintained by younger speakers, but is phonetically manifested in terms of differentiated tonal melodies: laryngeally unmarked (lax) stops trigger the introduction of a default L tone, while laryngeally marked stops (aspirated and tense) introduce H, triggered by a feature specification for [stiff].
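
The core acoustic comparison (mean VOT and mean F0 by stop category) reduces to a simple grouped summary. A sketch with hypothetical token measurements, not the study's data:

```python
import pandas as pd

# Hypothetical token-level measurements (illustrative values only):
# VOT in ms and F0 in Hz for each word-initial stop category.
tokens = pd.DataFrame({
    "stop":   ["lax", "lax", "aspirated", "aspirated", "tense", "tense"],
    "vot_ms": [62, 68, 65, 71, 12, 14],
    "f0_hz":  [178, 182, 240, 236, 233, 229],
})

# Overlapping lax/aspirated VOT with markedly lower F0 for lax stops is
# the younger-speaker pattern the paper describes.
print(tokens.groupby("stop")[["vot_ms", "f0_hz"]].mean())
```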

Journal ArticleDOI
TL;DR: In this article, a framework of value chain agility and its theoretical underpinnings is presented, and the authors identify the drivers and determinants of VC agility as characteristics enabling flexibility within key components of a firm's VC.
Abstract: Purpose – To gain understanding of value chain (VC) agility in terms of value‐adding processes, this paper seeks to present a VC agility framework and then to develop the involved constructs.Design/methodology/approach – A framework of VC agility and its theoretical underpinnings is presented. Within the framework, drivers and determinants of VC agility are identified as characteristics enabling flexibility within key components of a firm's VC. Also, it is posited that information technology (IT) capability impacts the levels of achieved flexibility and agility, and that VC agility impacts business performance.Findings – From scale development, key determinants of flexibility within VC activities are identified. Correlation analysis suggests that firms derive higher levels of agility through integrating information across the VC rather than within VC activities. Firms with flexibility in their VC functions enjoy higher levels of ensuing VC agility and on‐time delivery, ROA, and market share.Research limit...

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the combined effect of leadership style and person-job fit on emotional exhaustion using a sample of employees who provided healthcare and social benefits to a large metropolitan county, and explored how the impact of emotional exhaustion on organizational deviance behaviors is mediated by employees' job satisfaction and organizational commitment.

Journal ArticleDOI
TL;DR: Drawing on the extensive literature in organizational theory and management, ambidexterity is advocated as a viable solution for systems development organizations attempting to harness the benefits of both agile and traditional development.
Abstract: Emerging evidence seems to indicate that most systems development organizations are attempting to utilize both agile and traditional approaches. This study aims to understand the reasons organizations feel the need for this unlikely juxtaposition and the organizational challenges in sustaining the opposing cultures. Drawing on the extensive literature in organizational theory and management, we advocate ambidexterity as a viable solution for systems development organizations attempting to harness the benefits of both agile and traditional development.

Journal ArticleDOI
TL;DR: In this paper, a first-principles study on strongly correlated monoclinic cupric oxide CuO has been performed by using the LSDA+U method.
Abstract: A first-principles study on strongly correlated monoclinic cupric oxide CuO has been performed by using the LSDA+U method. The optimized structural parameters of the crystal CuO are in good agreement with the experimental data. The electronic structures and magnetic properties calculated from the LSDA+U method show that, in its ground state, CuO is a semiconducting, antiferromagnetic material with an indirect band gap of 1.0 eV and a local magnetic moment per unit formula of 0.60 μB, which agree with the experimental results. The carrier effective masses in CuO are larger than those in silicon, indicating smaller carrier mobilities. We have also investigated native point defects in CuO. Our results show that CuO is intrinsically a p-type semiconductor because Cu vacancies are the most stable defects in both Cu-rich and O-rich environments.

Journal ArticleDOI
TL;DR: The performance comparisons show that H.264 can achieve a coding-efficiency improvement of about 1.5 times or greater on each test sequence for multimedia, SDTV, and HDTV applications.

Book ChapterDOI
18 Sep 2006
TL;DR: This paper describes a matrix decomposition formulation for Boolean data, the Discrete Basis Problem, shows that the problem is computationally difficult, and gives a simple greedy algorithm for solving it.
Abstract: Matrix decomposition methods represent a data matrix as a product of two smaller matrices: one containing basis vectors that represent meaningful concepts in the data, and another describing how the observed data can be expressed as combinations of the basis vectors. Decomposition methods have been studied extensively, but many methods return real-valued matrices. If the original data is binary, the interpretation of the basis vectors is hard. We describe a matrix decomposition formulation, the Discrete Basis Problem. The problem seeks a Boolean decomposition of a binary matrix, thus allowing the user to easily interpret the basis vectors. We show that the problem is computationally difficult and give a simple greedy algorithm for solving it. We present experimental results for the algorithm. The method gives intuitively appealing basis vectors. On the other hand, the continuous decomposition methods often give better reconstruction accuracies. We discuss the reasons for this behavior.
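
To make the setting concrete, here is a simplified greedy Boolean factorization in the spirit of the Discrete Basis Problem. It is a sketch, not the paper's algorithm: candidate basis vectors are taken directly from rows of the data, and each factor is chosen to maximize newly covered 1s minus wrongly covered 0s:

```python
import numpy as np

def greedy_boolean_factorization(X: np.ndarray, k: int):
    """Greedy Boolean factorization X ~ U o V (Boolean matrix product).
    Simplified heuristic: basis candidates are rows of X; each factor
    greedily maximizes newly covered 1s minus wrongly covered 0s."""
    Xb = X.astype(bool)
    n, m = Xb.shape
    U = np.zeros((n, k), dtype=bool)
    V = np.zeros((k, m), dtype=bool)
    covered = np.zeros_like(Xb)
    for f in range(k):
        best_gain, best = -1, None
        for r in range(n):
            basis = Xb[r]  # candidate basis vector
            gain_per_row = (((Xb & basis) & ~covered).sum(1)
                            - (~Xb & basis).sum(1))
            use = gain_per_row > 0  # rows that benefit from this factor
            gain = gain_per_row[use].sum()
            if gain > best_gain:
                best_gain, best = gain, (use, basis)
        use, basis = best
        U[:, f], V[f] = use, basis
        covered |= np.outer(use, basis)
    return U, V

X = np.array([[1, 1, 0, 0],
              [1, 1, 1, 1],
              [0, 0, 1, 1]])
U, V = greedy_boolean_factorization(X, k=2)
print(((U.astype(int) @ V.astype(int)) > 0).astype(int))  # reconstructs X exactly here
```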

Journal ArticleDOI
Abstract: The effects of downstream base-level control on fluvial architecture and geometry are well explored in several broadly similar sequence-stratigraphic models. Cretaceous Dakota Group strata, U.S. Western Interior, have characteristics reflecting combined downstream and upstream base-level controls that these models cannot address. Particularly, three layers of amalgamated channel-belt sandstone within this group thicken and are continuous for distances (≤ 300 km) along dip that stretch the reasonable lengths for which these models are intended to apply. As well, architecture in up-dip reaches records repeated valley-scale cut-and-fill cycles. This contrasts with equivalent strata down dip which record channel-scale lateral migration with no such valley-scale cycles apparent. We here introduce the concept of "buffers and buttresses" to address these observations. We assume that river longitudinal profiles are each anchored down dip to some physical barrier (e.g., the sea strand, etc.) that we refer to as a "buttress." Buttress shift is considered the primary downstream control on base level. Profiles extrapolated up dip from the buttress over any modeled duration of buttress shift can range widely because of high-frequency variability in upstream base-level controls (e.g., discharge, etc). All these potential profiles however are bounded above by the profile of highest possible aggradation, and below by the profile of maximum possible incision. These upper and lower profiles are "buffers," and they envelop the available fluvial preservation space. Thickness of the buffer zone is determined by variability in upstream controls and should increase up dip to the limit of downstream profile dominance. Dakota valley-scale surfaces record repeated cut-and-fill cycles driven by up-dip controls and are confined between thick stable buffers. Equivalent strata down dip record lateral reworking within a thinner channel-scale buffer zone that was positioned by downstream controls. Regression exposed slopes similar to the buffer zone, thus buffers were stable for long distances and durations. This prompted dip-extensive lateral reworking of strata into upstream valley-scale and downstream channel-scale sheets. Buffers and buttresses provide a broadly applicable model for fluvial preservation that captures upstream vs. downstream base-level controls on geometry and architecture. The model lends general insights into dip-oriented variations in fluvial architecture, production of sheet vs. lens geometry, total preservation volumes for fluvial systems, and variations in these factors related to contrasting climatic conditions and basin physiography. The model can be amended to existing sequence stratigraphic approaches in order to capture dip-oriented variations in sequence architecture.

Journal ArticleDOI
TL;DR: The MADRS showed about twice the precision in estimating depression as either the HRSD(17) or HRSD(6) for average severity of depression, and would be superior to the HRSDs in the conduct of clinical trials.

Journal ArticleDOI
TL;DR: It is pointed out that many languages make use of a number of valves, and that these valves are not articulations on a glottal continuum but represent a synergistic and hierarchical system of laryngeal articulations.
Abstract: The standard method of describing phonation for tone, vocal register, stress and other linguistic categories relies on the ‘continuum hypothesis’ that linguistic sounds are produced by means of glottal states determined by the aperture between the arytenoid cartilages, the endpoints of the voiceless–voiced continuum being ‘open glottis’ and ‘closed glottis’. This paper takes a different view, pointing out that many languages make use of a number of valves, and that these valves are not articulations on a glottal continuum but represent a synergistic and hierarchical system of laryngeal articulations. These valves constitute a principal source of phonological contrast, with an influence on how oral articulatory events are characterised.

Journal ArticleDOI
TL;DR: In this paper, the authors point out gaps in the current literature and examine the link between outsourcing implementation and firms' performance metrics by analysing hard data, and point out three main gaps: lack of objective metrics for outsourcing results evaluation, lack of research on the relationship between outsourcing implementations and companies' value, and lack of work on the outsourcing contract itself.
Abstract: Purpose – Outsourcing emerged as a popular operational strategy in the 1990s, and most of the current literature was established during that period. However, the results of outsourcing remain vague. The purpose of this article is to point out gaps in the current literature and examine the link between outsourcing implementation and firms' performance metrics by analysing hard data.Design/methodology/approach – In this research, the methodologies of current outsourcing research (1990 to 2003) are grouped into five categories: case study, survey, conceptual framework, mathematical modeling, and financial data analyses; research scope is identified by three areas: outsourcing determinants, outsourcing process, and outsourcing results.Findings – This article identifies three main gaps in the current literature: lack of objective metrics for outsourcing results evaluation, lack of research on the relationship between outsourcing implementation and firms' value, and lack of research on the outsourcing contract itself.Rese...