
Showing papers by "University of Portsmouth published in 2005"


Journal ArticleDOI
TL;DR: This critical review of polymers that can respond to external stimuli considers the types of stimulus response used in therapeutic applications and the main classes of responsive materials developed to date.
Abstract: Polymers that can respond to external stimuli are of great interest in medicine, especially as controlled drug release vehicles. In this critical review, we consider the types of stimulus response used in therapeutic applications and the main classes of responsive materials developed to date. Particular emphasis is placed on the wide-ranging possibilities for the biomedical use of these polymers, ranging from drug delivery systems and cell adhesion mediators to controllers of enzyme function and gene expression (134 references).

1,569 citations


Journal ArticleDOI
TL;DR: In this paper, the state of the art in using passive sampling technology for environmental monitoring of waterborne organic and inorganic pollutants is reviewed, and strategies for sampler design, calibration, in situ sampling and quality control issues are discussed.
Abstract: We review the state of the art in using passive sampling technology for environmental monitoring of waterborne organic and inorganic pollutants. We discuss strategies for sampler design, calibration, in situ sampling and quality-control issues, and advantages and challenges associated with passive sampling in aqueous environments. We then review typical applications of passive samplers in assessing the aquatic environment.

785 citations


Journal ArticleDOI
TL;DR: The Third Data Release of the Sloan Digital Sky Survey (SDSS) as mentioned in this paper contains data taken up through 2003 June, including imaging data in five bands over 5282 deg2, photometric and astrometric catalogs of the 141 million objects detected in these imaging data, and spectra of 528,640 objects selected over 4188 deg2.
Abstract: This paper describes the Third Data Release of the Sloan Digital Sky Survey (SDSS). This release, containing data taken up through 2003 June, includes imaging data in five bands over 5282 deg2, photometric and astrometric catalogs of the 141 million objects detected in these imaging data, and spectra of 528,640 objects selected over 4188 deg2. The pipelines analyzing both images and spectroscopy are unchanged from those used in our Second Data Release.

734 citations


Journal ArticleDOI
TL;DR: In this study, mass spectra of intact proteins were obtained for the first time using laser desorption without adding a matrix, demonstrating the applicability of ELDI to the analysis of proteins and synthetic organic compounds.
Abstract: A new method of electrospray-assisted laser desorption/ionization (ELDI) mass spectrometry, which combines laser desorption with post-ionization by electrospray, was applied to rapid analysis of solid materials under ambient conditions. Analytes were desorbed from solid metallic and insulating substrata using a pulsed nitrogen laser. Post-ionization produced high-quality mass spectra characteristic of electrospray, including protein multiple charging. For the first time, mass spectra of intact proteins were obtained using laser desorption without adding a matrix. Bovine cytochrome c and an illicit drug containing methaqualone were chosen in this study to demonstrate the applicability of ELDI to the analysis of proteins and synthetic organic compounds.

482 citations


Journal ArticleDOI
TL;DR: Hibiscus sabdariffa L. (English: roselle, red sorrel; Arabic: karkade), the calyces of which are used in many parts of the world to make cold and hot drinks, contains ascorbic acid (vitamin C).
Abstract: This article reviews the reported phytochemical, pharmacological and toxicological properties of Hibiscus sabdariffa L. (English: roselle, red sorrel; Arabic: karkade), the calyces of which are used in many parts of the world to make cold and hot drinks. Nutritionally, these contain ascorbic acid (vitamin C). In folk medicine, the calyx extracts are used for the treatment of several complaints, including high blood pressure, liver diseases and fever. The pharmacological actions of the calyx extracts include strong in vitro and in vivo antioxidant activity. In rats and rabbits, the extract showed antihypercholesterolaemic, antinociceptive and antipyretic, but not antiinflammatory activities. In rat and man a strong antihypertensive action has been demonstrated. The effects of the calyx extracts on smooth muscles in vitro are variable, but they mostly inhibit the tone of the isolated muscles. In healthy men, consumption of H. sabdariffa has resulted in significant decreases in the urinary concentrations of creatinine, uric acid, citrate, tartrate, calcium, sodium, potassium and phosphate, but not oxalate. Oil extracted from the plant's seeds has been shown to have an inhibitory effect on some bacteria and fungi in vitro. The plant extracts are characterized by a very low degree of toxicity. The LD50 of H. sabdariffa calyx extract in rats was found to be above 5000 mg/kg. A single report has suggested that excessive doses for relatively long periods could have a deleterious effect on the testes of rats. In view of its reported nutritional and pharmacological properties and relative safety, H. sabdariffa and compounds isolated from it (for example, anthocyanins and Hibiscus protocatechuic acid) could be a source of therapeutically useful products.

420 citations


Journal ArticleDOI
TL;DR: The data suggest that the unique spatial distribution of di- and tri-Me K36/H3 plays a role in transcriptional termination and/or early RNA processing in higher eukaryotes.

409 citations


Journal ArticleDOI
TL;DR: The author deals with the problems of power and resistance, distinguishing truth from authenticity, the (im)possibility of consent if knowing is a problem for both the interviewer and the interviewee, and the nature and significance of stories and the self.
Abstract: Despite the popularity of the interview in qualitative research, methodological and theoretical problems remain. In this article, the author critically examines some of these problems for the researcher. He deals with the problems of power and resistance, distinguishing truth from authenticity, the (im)possibility of consent if knowing is a problem for both the interviewer and the interviewee, and the nature and significance of stories and the self. Although it is not always possible to address these problems directly, the author seeks in this article to create a dialogue with all of us for whom the interview is judged to be the appropriate answer to the research question “How can I know...?”

391 citations


Journal ArticleDOI
TL;DR: The previously published POSSUM predictor equation for mortality performed badly when tested using a standard test of goodness of fit for logistic regression and must be modified.
Abstract: POSSUM (Physiological and Operative Severity Score for the enUmeration of Morbidity and mortality) has been studied as a possible surgical audit system for a 9-month interval using a sample of 28 per cent of the general surgical workload. Mortality or survival was analysed as an endpoint. In this sample the published POSSUM predictor equation for mortality overpredicted deaths by a factor of more than two. The bulk of the overprediction occurred in the group at lowest risk (predicted mortality 10 per cent or less), in which death was overpredicted by a factor of six. This is the most important group for audit purposes since it contains the majority of surgical patients and is composed of fit patients undergoing minor surgery. The published predictor equation for mortality returns a minimum predicted mortality of 1.08 per cent, clearly far higher than that expected for a fit patient having minor surgery. Logistic regression was done on a set of 1485 surgical episodes to generate a local predictor equation for mortality. This process gave a predictor equation that fitted well with the observed mortality rate and gave a minimum predicted risk of mortality of 0.20 per cent. The previously published POSSUM predictor equation for mortality performed badly when tested using a standard test of goodness of fit for logistic regression and must be modified.
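The 1.08 per cent floor the authors criticise can be reproduced with a short sketch. The coefficients below are the widely quoted published POSSUM mortality equation (an assumption on my part; they are not stated in this abstract), applied to the minimum possible scores of PS = 12 and OS = 6:

```python
import math

def possum_mortality_risk(phys_score, op_score,
                          intercept=-7.04, b_phys=0.13, b_op=0.16):
    """Predicted mortality risk from a POSSUM-style logistic equation.

    Default coefficients are the commonly cited published POSSUM values
    (assumed here); the paper's point is that a locally refitted
    equation gives a far lower minimum predicted risk (0.20 per cent).
    """
    logit = intercept + b_phys * phys_score + b_op * op_score
    return 1.0 / (1.0 + math.exp(-logit))

# Minimum possible scores: the published equation can never predict a
# risk below about 1.08 per cent, even for a fit patient.
min_risk = possum_mortality_risk(12, 6)
print(round(100 * min_risk, 2))  # 1.08
```

This makes the audit problem concrete: any logistic predictor with a fixed intercept of this size has a hard floor on predicted risk, which is why refitting on local data was needed.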

367 citations


Journal ArticleDOI
TL;DR: In this article, the authors reviewed the available SVA research, including the accuracy of Criteria-Based Content Analysis (CBCA; part of SVA), interrater agreement between CBCA coders, frequency of occurrence of CBCA criteria in statements, correlations between CBCA scores and interviewee's age and social and verbal skills, and issues regarding the Validity Checklist.
Abstract: Statement Validity Assessment (SVA) is used to assess the veracity of child witnesses’ testimony in trials for sexual offences. The author reviewed the available SVA research. Issues addressed include the accuracy of Criteria-Based Content Analysis (CBCA; part of SVA), interrater agreement between CBCA coders, frequency of occurrence of CBCA criteria in statements, the correlations between CBCA scores and (i) interviewer’s style and (ii) interviewee’s age and social and verbal skills, and issues regarding the Validity Checklist (another part of SVA). Implications for the use of SVA assessments in criminal courts are discussed. It is argued that SVA evaluations are not accurate enough to be admitted as expert scientific evidence in criminal courts but might be useful in police investigations.

359 citations


Journal ArticleDOI
TL;DR: A conceptual model of an analytical CRM system for customer knowledge acquisition is developed based on the findings and a literature review, shedding light on the areas in which organisations can strategically use CRM systems.
Abstract: Purpose – This paper aims to examine how customer relationship management (CRM) systems are implemented in practice with a focus on the strategic application, i.e. how analytical CRM systems are used to support customer knowledge acquisition and how such a system can be developed.Design/methodology/approach – The current practice of CRM application is based on examining data reported from a four‐year survey of CRM applications in the UK and an evaluation of CRM analytical functions provided by 20 leading software vendors. A conceptual model of an analytical CRM system for customer knowledge acquisition is developed based on the findings and literature review.Findings – Current CRM systems are dominated by operational applications such as call centres. The application of analytical CRM has been low, and the provision of these systems is limited to a few leading software vendors.Practical implications – The findings shed light on the potential area in which organisations can strategically use CRM systems. I...

346 citations


Journal ArticleDOI
TL;DR: The C4 cluster catalog as discussed by the authors is a collection of 748 clusters of galaxies identified in the spectroscopic sample of the Second Data Release (DR2) of the Sloan Digital Sky Survey (SDSS).
Abstract: We present the C4 Cluster Catalog, a new sample of 748 clusters of galaxies identified in the spectroscopic sample of the Second Data Release (DR2) of the Sloan Digital Sky Survey (SDSS). The C4 cluster-finding algorithm identifies clusters as overdensities in a seven-dimensional position and color space, thus minimizing projection effects that have plagued previous optical cluster selection. The present C4 catalog covers ~2600 deg2 of sky and ranges in redshift from z = 0.02 to 0.17. The mean cluster membership is 36 galaxies (with measured redshifts) brighter than r = 17.7, but the catalog includes a range of systems, from groups containing 10 members to massive clusters with over 200 cluster members with measured redshifts. The catalog provides a large number of measured cluster properties including sky location, mean redshift, galaxy membership, summed r-band optical luminosity (Lr), and velocity dispersion, as well as quantitative measures of substructure and the surrounding large-scale environment. We use new, multicolor mock SDSS galaxy catalogs, empirically constructed from the ΛCDM Hubble Volume (HV) Sky Survey output, to investigate the sensitivity of the C4 catalog to the various algorithm parameters (detection threshold, choice of passbands, and search aperture), as well as to quantify the purity and completeness of the C4 cluster catalog. These mock catalogs indicate that the C4 catalog is 90% complete and 95% pure above M200 = 1 × 1014 h-1 M⊙ and within 0.03 ≤ z ≤ 0.12. Using the SDSS DR2 data, we show that the C4 algorithm finds 98% of X-ray–identified clusters and 90% of Abell clusters within 0.03 ≤ z ≤ 0.12. Using the mock galaxy catalogs and the full HV dark matter simulations, we show that the Lr of a cluster is a more robust estimator of the halo mass (M200) than the galaxy line-of-sight velocity dispersion or the richness of the cluster. 
However, if we exclude clusters embedded in complex large-scale environments, we find that the velocity dispersion of the remaining clusters is as good an estimator of M200 as Lr. The final C4 catalog will contain 2500 clusters using the full SDSS data set and will represent one of the largest and most homogeneous samples of local clusters.
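The purity and completeness bookkeeping used to validate the catalog can be illustrated with a minimal sketch. The matching of detections to mock haloes is assumed to have been done already, and all names and the toy numbers are illustrative, not the C4 code:

```python
def completeness_and_purity(true_halos, detections, matches):
    """Completeness: fraction of true haloes recovered by the finder.
    Purity: fraction of detections corresponding to a true halo.

    `matches` is a set of (halo_id, detection_id) pairs produced by a
    separate (assumed) cross-matching step.
    """
    matched_halos = {h for h, d in matches}
    matched_dets = {d for h, d in matches}
    completeness = len(matched_halos & set(true_halos)) / len(true_halos)
    purity = len(matched_dets & set(detections)) / len(detections)
    return completeness, purity

# Toy example: 10 mock haloes, 10 detections, 9 matched pairs,
# mimicking the "90% complete and 95% pure" style of statement.
halos = list(range(10))
dets = list(range(100, 110))
pairs = {(h, 100 + h) for h in range(9)}
print(completeness_and_purity(halos, dets, pairs))  # (0.9, 0.9)
```

In practice both quantities are reported as functions of mass and redshift (here, above M200 = 1 × 1014 h-1 M⊙ and within 0.03 ≤ z ≤ 0.12), not as single numbers.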

Journal ArticleDOI
TL;DR: Health promotion activities have long been considered important in providing support to develop social contacts, but there is a lack of evidence as to the effectiveness of interventions and the value of one-to-one interventions is unclear.
Abstract: The importance of tackling loneliness and social isolation to improve older people's well-being and quality of life is increasingly recognised. Health promotion activities have long been considered important in providing support to develop social contacts but there is a lack of evidence as to the effectiveness of interventions. The value of one-to-one interventions is unclear. Only one of the 11 studies reviewed was deemed to be useful and then the long-term effect was not maintained. Older people emphasise the need for reciprocity in social support so the visitor should be of similar age with common interests and a shared cultural and social background.

Journal ArticleDOI
TL;DR: Because individual differences will determine the extent to which students utilise this facility, it is suggested that future research should focus on developing online learning environments that incorporate activities with both a beneficial influence on learning and appeal to a wide student population.
Abstract: There has been much recent research examining online learning in universities, but two questions seem to have been largely overlooked in this context: (1) Which students voluntarily utilise web-based learning; and (2) Does this use influence their academic achievement? The current study aimed to determine whether the approaches to studying, ability, age, and gender of 110 undergraduates in the second year of a psychology degree predicted the extent to which they utilised online learning using Web Course Tools (WebCT) in support of a core Biological Psychology unit. Data were obtained from WebCT's student tracking system, Entwistle and Ramsden's 18-item Approaches to Studying Inventory (1983) and academic records. Multiple linear regressions and discriminant function analysis were used to examine whether individual differences predicted WebCT use, while analysis of covariance determined whether web use influenced academic achievement. The number of hits, length of access, and use of the bulletin board were predicted by age, with older students using WebCT more. These factors were also influenced by ability and achievement orientation. The degree of participation in self-assessment was not predicted by student variables, but, of those who repeated an online quiz, improvement was more likely in those with lower achievement orientation. Only bulletin board use influenced achievement, with those posting messages outperforming those not using, or passively using, bulletin boards. However, because individual differences will determine the extent to which students utilise this facility, it is suggested that future research should focus on developing online learning environments that incorporate activities with both a beneficial influence on learning and appeal to a wide student population.

Journal ArticleDOI
TL;DR: The third edition of the Sloan Digital Sky Survey (SDSS) Quasar catalog as discussed by the authors contains 46,420 objects in the SDSS Third Data Release that have luminosities larger than Mi = -22 (in a cosmology with H0 = 70 km s-1 Mpc-1, ΩM = 0.3, and ΩΛ =0.7), have at least one emission line with FWHM larger than 1000 km-s-1 or are unambiguously broad absorption line quasars, are fainter than
Abstract: We present the third edition of the Sloan Digital Sky Survey (SDSS) Quasar Catalog. The catalog consists of the 46,420 objects in the SDSS Third Data Release that have luminosities larger than Mi = -22 (in a cosmology with H0 = 70 km s-1 Mpc-1, ΩM = 0.3, and ΩΛ = 0.7), have at least one emission line with FWHM larger than 1000 km s-1 or are unambiguously broad absorption line quasars, are fainter than i = 15.0, and have highly reliable redshifts. The area covered by the catalog is ≈4188 deg2. The quasar redshifts range from 0.08 to 5.41, with a median value of 1.47; the high-redshift sample includes 520 quasars at redshifts greater than 4, of which 17 are at redshifts greater than 5. For each object the catalog presents positions accurate to better than 0.2 arcsec rms per coordinate, five-band (ugriz) CCD-based photometry with typical accuracy of 0.03 mag, and information on the morphology and selection method. The catalog also contains radio, near-infrared, and X-ray emission properties of the quasars, when available, from other large-area surveys. The calibrated digital spectra cover the wavelength region 3800–9200 Å at a spectral resolution of 2000; the spectra can be retrieved from the public database using the information provided in the catalog. A total of 44,221 objects in the catalog were discovered by the SDSS; 28,400 of the SDSS discoveries are reported here for the first time.

Journal ArticleDOI
TL;DR: In this paper, an 8 σ detection of cosmic magnification measured by the variation of quasar density due to gravitational lensing by foreground large-scale structure is presented, which is in good agreement with theoretical predictions based on the WMAP concordance cosmology.
Abstract: We present an 8 σ detection of cosmic magnification measured by the variation of quasar density due to gravitational lensing by foreground large-scale structure. To make this measurement we used 3800 deg2 of photometric observations from the Sloan Digital Sky Survey (SDSS) containing ~200,000 quasars and 13 million galaxies. Our measurement of the galaxy-quasar cross-correlation function exhibits the amplitude, angular dependence, and change in sign as a function of the slope of the observed quasar number counts that is expected from magnification bias due to weak gravitational lensing. We show that observational uncertainties (stellar contamination, Galactic dust extinction, seeing variations, and errors in the photometric redshifts) are well controlled and do not significantly affect the lensing signal. By weighting the quasars with the number count slope, we combine the cross-correlation of quasars for our full magnitude range and detect the lensing signal at >4 σ in all five SDSS filters. Our measurements of cosmic magnification probe scales ranging from 60 h-1 kpc to 10 h-1 Mpc and are in good agreement with theoretical predictions based on the WMAP concordance cosmology. As with galaxy-galaxy lensing, future measurements of cosmic magnification will provide useful constraints on the galaxy-mass power spectrum.
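The sign change with number-count slope follows from the standard magnification-bias relation n(μ) ≈ n0 μ^(2.5s−1), where s is the logarithmic slope of the quasar number counts. A minimal sketch of that textbook weak-lensing result (not code from the paper):

```python
def magnification_weight(slope):
    """Amplitude/sign factor (2.5*s - 1) for magnification bias.

    Counts steeper than s = 0.4 give an excess of background quasars
    behind overdensities; shallower counts give a deficit. This sign
    flip with slope is the signature the paper measures.
    """
    return 2.5 * slope - 1.0

def magnified_count_ratio(slope, mu):
    """Count ratio n(mu)/n0 = mu**(2.5*s - 1) under weak lensing
    magnification mu (valid for mu close to 1)."""
    return mu ** magnification_weight(slope)

# Bright quasars (steep counts): excess; faint quasars (shallow counts): deficit.
print(magnified_count_ratio(1.0, 1.05) > 1.0)   # True
print(magnified_count_ratio(0.2, 1.05) < 1.0)   # True
```

Weighting each quasar by this factor, as described above, aligns the signs of the bright and faint subsamples so the whole magnitude range can be combined into one detection.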

Journal ArticleDOI
TL;DR: In this paper, the authors discuss how photography might help give research participants a louder voice in critical accounting and management research, enabling their multiple voices to be better represented/performed through the technique of "native image making".
Abstract: Purpose – The main objective of this paper is to discuss how photography might help give research participants a louder voice in (qualitative) critical accounting and management research, enabling their multiple voices to be better represented/performed through the technique of “native image making”. A secondary aim is to familiarise the reader with key developments and debates in the field of “visual research” more generally. Design/methodology/approach – A brief overview of the field is offered, and, drawing on examples from the author's visual research practice, how the concept of “photo‐voice” might increase participants' involvement in research in two ways is discussed. Findings – First, it is argued that accessibility of the method, control of the research agenda and ownership of the images give a louder voice in the process of research. Second, and following Barthes, it is contended that through their iconic and quasi‐representational nature, photographic images can communicate participants' views of their worlds with more primacy than language alone, raising their voices in the dissemination of research. Practical implications – The paper has especial implications for researchers engaged in critical studies of accounting and management seeking to give voice to marginal groups of people traditionally disregarded by mainstream organization/management studies. Originality/value – The paper contributes to the development of a novel qualitative methodology for accounting and management research.

Journal ArticleDOI
TL;DR: This is the first study to demonstrate a direct relationship between tissue-diet isotopic spacing in N and growth rate and adds to the growing list of factors known to influence the level of isotopic separation between a consumer's tissue and that of its diet.
Abstract: The difference in isotopic composition between a consumer's tissues and that of its diet is a critical aspect of the use of stable isotope analyses in ecological and palaeoecological studies. In a controlled feeding experiment with the Atlantic salmon, Salmo salar, we demonstrate for the first time that the value of tissue-diet isotope spacing in nitrogen in a growing animal is not constant, but varies inversely with growth rate. The value of tissue-diet isotopic spacing in N reflects N use efficiency. Thus, in salmon, growth rate is accompanied by, or requires, increased N use efficiency. The total range in tissue-diet isotopic spacing in N seen in the experimental population of 25 fish was 1‰, approximately 50% of the total trophic shift. Mean equilibrium tissue-diet isotopic spacing (± standard deviation) in salmon averaged 2.3‰ (± 0.3‰) and 0.0‰ (± 0.3‰) for N in muscle and liver, respectively, and 2.1‰ (± 0.1‰) and 1.6‰ (± 0.3‰) for C in muscle and liver, respectively. Feeding with a mixed dietary source (wheat and fish-meal origin) resulted in tissue-diet isotopic fractionation in both C and N due to the differential digestibility of food components with distinct isotopic composition. The rate of change in isotopic composition of S. salar tissues was dominated by growth, but the estimated contribution of metabolic turnover to change in tissue N was relatively high for an ectothermic animal at ca. 20–40%. The estimated half-life for metabolic turnover of the tissue N pool was ca. 4 months in both muscle and liver tissue. This is the first study to demonstrate a direct relationship between tissue-diet isotopic spacing in N and growth rate and adds to the growing list of factors known to influence the level of isotopic separation between a consumer's tissue and that of its diet. Copyright © 2005 John Wiley & Sons, Ltd.

Stable isotope analysis (SIA) is commonly used to infer diet and trophic level in ecosystem studies. SIA offers advantages over gut content analysis as a method to study ecosystem structure because the isotopic composition of animal tissue reflects the average diet assimilated over a length of time, usually of the order of weeks to months. SIA may also be performed retrospectively using archived, historic or archaeological materials,1,2 allowing reconstruction of ecosystem or
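The growth-versus-turnover decomposition described above is commonly modelled as a single-pool exponential in which growth dilution and metabolic turnover act as additive rate constants. A hedged sketch of that standard model, with all parameter values hypothetical rather than taken from the paper:

```python
import math

def tissue_isotope_delta(t, delta0, delta_eq, k_growth, k_metab):
    """Single-pool model: tissue delta-15N approaches its equilibrium
    value exponentially at rate k_growth + k_metab (an assumed model
    form, widely used in diet-switch experiments)."""
    k = k_growth + k_metab
    return delta_eq + (delta0 - delta_eq) * math.exp(-k * t)

def metabolic_contribution(k_growth, k_metab):
    """Fraction of isotopic change attributable to metabolic turnover,
    as opposed to growth dilution."""
    return k_metab / (k_growth + k_metab)

# A metabolic half-life of ~4 months implies k_metab = ln(2)/4 per month.
k_m = math.log(2) / 4.0
k_g = 0.4  # hypothetical growth rate constant, per month
print(round(metabolic_contribution(k_g, k_m), 2))  # 0.3, within the 20-40% range
```

Under this model a fast-growing fish (large k_growth) equilibrates to a new diet mostly by adding new tissue, which is consistent with the paper's finding that growth dominated the rate of isotopic change.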

Journal ArticleDOI
TL;DR: Key factors controlling recognition and release by imprinted polymer matrices are discussed, the current limiting factors in their properties arising from the synthesis of these materials are considered, and the future prospects for imprinted polymers in drug delivery are outlined.

Journal ArticleDOI
TL;DR: In this article, the authors reported the intermediate-scale (0.3-40 h-1 Mpc) clustering of 35,000 luminous early-type galaxies at redshifts 0.16-0.44.
Abstract: We report the intermediate-scale (0.3-40 h-1 Mpc) clustering of 35,000 luminous early-type galaxies at redshifts 0.16-0.44 from the Sloan Digital Sky Survey. We present the redshift space two-point correlation function ξ(s), the projected correlation function wp(rp), and the deprojected real space correlation function ξ(r), for approximately volume-limited samples. As expected, the galaxies are highly clustered, with the correlation length varying from 9.8 ± 0.2 to 11.2 ± 0.2 h-1 Mpc, dependent on the specific luminosity range. For the -23.2 < Mg < -21.2 sample, the inferred bias relative to that of L* galaxies is 1.84 ± 0.11 for 1 h-1 Mpc < rp < 10 h-1 Mpc, with yet stronger clustering on smaller scales. We detect luminosity-dependent bias within the sample but see no evidence for redshift evolution between z = 0.2 and z = 0.4. We find a clear indication for deviations from a power-law in the real space correlation function, with a dip at ~2 h-1 Mpc scales and an upturn on smaller scales. The precision measurements of these clustering trends offer new avenues for the study of the formation and evolution of these massive galaxies.

Journal ArticleDOI
TL;DR: This article used the Two-Degree Field (2dF) instrument on the Anglo-Australian Telescope (AAT) to obtain redshifts of a sample of z 21 deep surveys for quasars.
Abstract: We have used the Two-Degree Field (2dF) instrument on the Anglo-Australian Telescope (AAT) to obtain redshifts of a sample of z 21 deep surveys for quasars. The 2SLAQ data exhibit no well-defined 'break' in the number counts or luminosity function, but do clearly flatten with increasing magnitude. Finally, we find that the shape of the quasar luminosity function derived from 2SLAQ is in good agreement with that derived from Type I quasars found in hard X-ray surveys.

Journal ArticleDOI
TL;DR: A current perspective on material/microbe interactions pertinent to biocorrosion and biofouling is offered, with EPS as a focal point, while emphasizing the role atomic force spectroscopy and mass spectrometry techniques can play in elucidating such interactions.
Abstract: The presence of microorganisms on material surfaces can have a profound effect on materials performance. Surface-associated microbial growth, i.e. a biofilm, is known to instigate biofouling. The presence of biofilms may promote interfacial physico-chemical reactions that are not favored under abiotic conditions. In the case of metallic materials, undesirable changes in material properties due to a biofilm (or a biofouling layer) are referred to as biocorrosion or microbially influenced corrosion (MIC). Biofouling and biocorrosion occur in aquatic and terrestrial habitats varying in nutrient content, temperature, pressure and pH. Interfacial chemistry in such systems reflects a wide variety of physiological activities carried out by diverse microbial populations thriving within biofilms. Biocorrosion can be viewed as a consequence of coupled biological and abiotic electron-transfer reactions, i.e. redox reactions of metals, enabled by microbial ecology. Microbially produced extracellular polymeric substances (EPS), which comprise different macromolecules, mediate initial cell adhesion to the material surface and constitute a biofilm matrix. Despite their unquestionable importance in biofilm development, the extent to which EPS contribute to biocorrosion is not well-understood. This review offers a current perspective on material/microbe interactions pertinent to biocorrosion and biofouling, with EPS as a focal point, while emphasizing the role atomic force spectroscopy and mass spectrometry techniques can play in elucidating such interactions. [Int Microbiol 2005; 8(3):157-168]

Journal ArticleDOI
TL;DR: It is found that depression affects the allocation of attention and all elements of working memory, and it is proposed that the source of general disruption in both depressed and anxious patients may be a competition between attempts to direct attentional resources to the task in hand and away from the distractive and intrusive effects of automatic negative thoughts.
Abstract: Introduction. Both Channon, Baker, and Robertson (1993) and Hartlage, Alloy, Vazquez, and Dykman (1993) claim that working memory impairment in depressed patients is limited to Baddeley's (1996) central executive and does not affect either the phonological loop or the visuospatial scratchpad. Our key questions were: (1) is there an impairment of working memory in depression and which elements does it effect; (2) is another major clinical group also affected and in what ways, and finally, (3) how do these groups vary when compared with each other and with normals? Thus we sought to locate a depression-specific effect and define its extent. Methods. We tested 35 depressed patients, using both 24 anxiety patients and 29 normal controls as comparisons. Several tasks were used so that we could differentiate between the three key aspects of working memory. Results. Contrary to Channon et al., we found that depression affects the allocation of attention and all elements of working memory. The depression group sh...

Journal ArticleDOI
TL;DR: In this paper, an analysis of the color-magnitude-velocity dispersion relation for a sample of 39,320 early-type galaxies within the Sloan Digital Sky Survey is presented.
Abstract: We present an analysis of the color–magnitude–velocity dispersion relation for a sample of 39,320 early-type galaxies within the Sloan Digital Sky Survey. We demonstrate that the color-magnitude relation is entirely a consequence of the fact that both the luminosities and colors of these galaxies are correlated with stellar velocity dispersions. Previous studies of the color-magnitude relation over a range of redshifts suggest that the luminosity of an early-type galaxy is an indicator of its metallicity, whereas residuals in color from the relation are indicators of the luminosity-weighted age of its stars. We show that this, when combined with our finding that velocity dispersion plays a crucial role, has a number of interesting implications. First, galaxies with large velocity dispersions tend to be older (i.e., they scatter redward of the color-magnitude relation). Similarly, galaxies with large dynamical mass estimates also tend to be older. In addition, at fixed luminosity galaxies that are smaller or have larger velocity dispersions or are more massive tend to be older. Second, models in which galaxies with the largest velocity dispersions are also the most metal-poor are difficult to reconcile with our data. However, at fixed velocity dispersion galaxies have a range of ages and metallicities: the older galaxies have smaller metallicities and vice versa. Finally, a plot of velocity dispersion versus luminosity can be used as an age indicator: lines of constant age run parallel to the correlation between velocity dispersion and luminosity.

Journal ArticleDOI
TL;DR: Despite the widespread use of the term “mental toughness” by performers, coaches and sport psychology consultants alike, it is only recently that researchers (e.g., Jones, Hanton, & Connaughton, 2...
Abstract: Despite the widespread use of the term “mental toughness” by performers, coaches and sport psychology consultant's alike, it is only recently that researchers (e.g., Jones, Hanton, & Connaughton, 2...

Book ChapterDOI
01 Jan 2005
TL;DR: This chapter presents the methodology of Multiple-Criteria Decision Aiding based on preference modelling in terms of “if…, then …” decision rules, and surveys some basic applications of this approach, starting from multiple-criteria classification problems, and going through decision under uncertainty, hierarchical decision making, classification problems with partially missing information, problems with imprecise information modelled by fuzzy sets, and some classical problems of operations research.
Abstract: In this chapter we present the methodology of Multiple-Criteria Decision Aiding (MCDA) based on preference modelling in terms of “if…, then …” decision rules. The basic assumption of the decision rule approach is that the decision maker (DM) agrees to give preference information in terms of examples of decisions and looks for simple rules justifying her decisions. An important advantage of this approach is the possibility of handling inconsistencies in the preference information, resulting from hesitations of the DM. The proposed methodology is based on the elementary, natural and rational principle of dominance. It says that if action x is at least as good as action y on each criterion from a considered family, then x is also comprehensively at least as good as y. The set of decision rules constituting the preference model is induced from the preference information using a knowledge discovery technique properly adapted to handle the dominance principle. The mathematical basis of the decision rule approach to MCDA is the Dominance-based Rough Set Approach (DRSA) developed by the authors. We present some basic applications of this approach, starting from multiple-criteria classification problems, then going through decision under uncertainty, hierarchical decision making, classification problems with partially missing information, and problems with imprecise information modelled by fuzzy sets, up to multiple-criteria choice and ranking problems and some classical problems of operations research. All these applications are illustrated by didactic examples whose aim is to show in an easy way how DRSA can be used in various contexts of MCDA.
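The dominance principle stated in the abstract above can be illustrated in a few lines of Python (a didactic sketch with hypothetical gain-type criteria, not code from the DRSA authors):

```python
def dominates(x, y):
    """True if action x is at least as good as action y on every criterion.

    Criteria are assumed to be gain-type: larger values are better.
    """
    return all(xi >= yi for xi, yi in zip(x, y))


# Hypothetical evaluations of three actions on three gain criteria.
a = (7, 5, 9)
b = (6, 5, 8)
c = (8, 4, 9)

print(dominates(a, b))  # True: a is at least as good as b on each criterion
print(dominates(a, c))  # False: a is worse than c on the first criterion
```

By the dominance principle, a is then comprehensively at least as good as b, while a and c are incomparable (each beats the other on some criterion); DRSA builds its rough approximations on exactly this relation.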

Journal ArticleDOI
TL;DR: The main prediction was that observers would obtain higher accuracy rates if the evidence against the suspects was presented in a late rather than early stage of the interrogation, and this prediction received support.
Abstract: Deception detection research has largely neglected an important aspect of many investigations, namely that there often exists evidence against a suspect. This study examined the potential of the timing of evidence disclosure as a deception detection tool. The main prediction was that observers (N = 116) would obtain higher accuracy rates if the evidence against the suspects (N = 58) was presented in a late rather than early stage of the interrogation. This prediction was based on the idea that late evidence disclosure would expose inconsistencies between the liars’ stories and the evidence, which could be used as a cue to deception. The main prediction received support. Late disclosure observers obtained an overall accuracy of 61.7%, compared to 42.9% for Early disclosure observers. Deceptive statements were identified with high accuracy (67.6%) in Late disclosure, indicating that the technique in this form is beneficial mainly for pinpointing lies.

Journal ArticleDOI
TL;DR: In this paper, the interannual variability of both surface and free-air temperature anomalies and the surface/free air temperature difference (ΔT) was examined at each location for the period 1948-2002.
Abstract: [1] Surface and free-air temperature observations from the period 1948–2002 are compared for 1084 surface locations at high elevations (>500 m) on all continents. Mean monthly surface temperatures are obtained from two homogeneity-adjusted data sets: Global Historical Climate Network (GHCN) and Climatic Research Unit (CRU). Free-air temperatures are interpolated both vertically and horizontally from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis R1 2.5° grids at given pressure levels. The compatibility of surface and free-air observations is assessed by examination of the interannual variability of both surface and free-air temperature anomalies and the surface/free-air temperature difference (ΔT). Correlations between monthly surface and free-air anomalies are high. The correlation is influenced by topography, valley bottom sites showing lower values because of the influence of temporally sporadic boundary layer effects. The annual cycle of the derived surface/free-air temperature difference (ΔT) demonstrates physically realistic variability. Cluster analysis shows coherent ΔT regimes, which are spatially organized. Temporal trends in surface and free-air temperatures and ΔT are examined at each location for 1948–1998. Surface temperatures show stronger, more statistically robust and widespread warming than free-air temperatures. Thus ΔT is increasing significantly at the majority of sites (>70%). A sensitivity analysis of trend magnitudes shows some reliance on the time period used. ΔT trend variability is dominated by surface trend variability because free-air trends are weak, but it is possible that reanalysis trends are unrealistically small. Results are sensitive to topography, with mountaintop sites showing weaker ΔT increases than other sites (although still positive). There is no strong relationship between any trend magnitudes and elevation. Since ΔT change is dependent on location, it is clear that temperatures at mountain sites are changing in ways contrasting to the free air.

Journal ArticleDOI
TL;DR: In this article, the authors explore the potential role of entrepreneurship in public sector organizations and propose five distinct types of entrepreneurial agents in the public sector: professional politician, spin-off creator, business entrepreneur in politics, career-driven public officer, and politically ambitious public officer.
Abstract: In this paper we explore the potential role of entrepreneurship in public sector organizations. First, we present a review of the entrepreneurship theme in the political science and public management research streams, comparing these ideas with the mainstream business literature on entrepreneurship. Thereafter, we illustrate empirically how Stevenson's classical framework of entrepreneurship can be applied in a European local government context to explain the recent initiatives to compete for and utilize European Union structural funds. The empirical basis of the study comprises ten in-depth case studies of local government organizations, five in the UK and five in Italy. Finally, we propose five distinct types of entrepreneurial agents in the public sector: professional politician; spin-off creator; business entrepreneur in politics; career-driven public officer; and politically ambitious public officer.

Journal ArticleDOI
TL;DR: Affinity-purified antibodies against the N-terminal region of H2A.Z are used in native chromatin immunoprecipitation experiments with mononucleosomes from three chicken cell types to resolve the apparent dichotomy between its reported links to gene expression in yeast and its concentration at heterochromatin in mammals.
Abstract: The replacement histone H2A.Z is variously reported as being linked to gene expression and preventing the spread of heterochromatin in yeast, or concentrated at heterochromatin in mammals. To resolve this apparent dichotomy, affinity-purified antibodies against the N-terminal region of H2A.Z, in both a triacetylated and non-acetylated state, are used in native chromatin immunoprecipitation experiments with mononucleosomes from three chicken cell types. The hyperacetylated species concentrates at the 5′ end of active genes, both tissue-specific and housekeeping, but is absent from inactive genes, while the unacetylated form is absent from both active and inactive genes. A concentration of H2A.Z is also found at insulators under circumstances implying a link to barrier activity but not to enhancer blocking. Although acetylated H2A.Z is widespread throughout the interphase genome, at mitosis its acetylation is erased, the unmodified form remaining. Thus, although H2A.Z may operate as an epigenetic marker for active genes, its N-terminal acetylation does not.

Journal ArticleDOI
TL;DR: In this article, a simple method for using bootstrap resampling to derive confidence intervals is described, which can be used for a wide variety of statistics, including the mean and median, the difference of two means or proportions, and correlation and regression coefficients.
Abstract: Confidence intervals are in many ways a more satisfactory basis for statistical inference than hypothesis tests. This article explains a simple method for using bootstrap resampling to derive confidence intervals. This method can be used for a wide variety of statistics—including the mean and median, the difference of two means or proportions, and correlation and regression coefficients. It can be implemented by an Excel spreadsheet, which is available to readers on the Web. The rationale behind the method is transparent, and it relies on almost no sophisticated statistical concepts.
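The percentile bootstrap method this abstract describes is straightforward to sketch in code. Below is a minimal Python illustration with made-up data (the paper itself supplies an Excel spreadsheet, not this code): resample the data with replacement many times, recompute the statistic each time, and read the confidence limits off the tails of the resulting distribution.

```python
import random


def bootstrap_ci(data, stat, n_resamples=10_000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    # Recompute the statistic on many resamples drawn with replacement.
    stats = sorted(
        stat([rng.choice(data) for _ in range(n)]) for _ in range(n_resamples)
    )
    # The (alpha/2) and (1 - alpha/2) percentiles bound the interval.
    lower = stats[int((alpha / 2) * n_resamples)]
    upper = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lower, upper


# Hypothetical sample; any statistic works, here the mean.
data = [4.1, 5.2, 6.0, 4.8, 5.5, 7.1, 4.9, 5.3, 6.2, 5.0]
mean = lambda xs: sum(xs) / len(xs)
low, high = bootstrap_ci(data, mean)
print(low, high)  # a 95% interval around the sample mean of 5.41
```

The same `bootstrap_ci` call works unchanged for the median, a correlation coefficient, or a difference of means, which is the generality the abstract emphasizes.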